I am trying to figure out the best way to organize a bunch of Ruby scripts to make it easier on the next person. One key thing is that there are multiple constants that need to be used across all scripts. Where should these be stored? Do I keep a separate file for these constants? Should I use YAML? I've never had to create a project with multiple Ruby source files interacting with each other, so I'm not sure what the best approach is here.
Thanks for the help.
I like to use a config.yaml file for all my constants. This makes it easy to set and change variables that are going to be used across different files. Then all you need to do is read in the file and set the variables. You can keep this file anywhere really, so long as anyone using the file has read permissions. All you have to do then is set the file path.
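For example, here's a minimal sketch of reading the file in (this assumes a config.yaml next to your scripts, and the keys are hypothetical):

require 'yaml'

# Load the shared settings once; freeze them so they behave like constants.
CONFIG = YAML.load_file(File.expand_path('config.yaml', __dir__)).freeze

API_URL = CONFIG['api_url']
RETRIES = CONFIG['retries']

Then every script just requires this one file to pick up the same constants.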
Hope this helps.
I like to do a config.yml or settings.yml, but I also allow the variables defined in config.yml to be overridden by ENV variables (might be overkill in your situation).
It might also be a good idea to set some defaults in your config loading/setting code.
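A minimal sketch of that loading pattern (the key names and default values here are just placeholders):

require 'yaml'

DEFAULTS = { 'api_url' => 'http://localhost', 'retries' => '3' }.freeze

# Settings from config.yml (if present) override the defaults...
file_settings = File.exist?('config.yml') ? YAML.load_file('config.yml') : {}

# ...and environment variables (e.g. API_URL, RETRIES) override both.
SETTINGS = DEFAULTS.merge(file_settings || {}).map { |key, value|
  [key, ENV.fetch(key.upcase, value)]
}.to_h.freeze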
As far as common functions/methods go... common.rb is a pretty good name or maybe shared.rb.
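For example (the module and method names are just placeholders):

# common.rb
module Common
  def self.log(message)
    puts "[#{Time.now}] #{message}"
  end
end

# in any of your scripts:
require_relative 'common'
Common.log('starting up')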
I have a situation where I have to keep my array in sync with language files, so every time I have to generate and translate it.
I was looking for a package like laravel-langman, since it has an option to sync. But now that I am looking, it doesn't allow me to create a key with a value directly via an artisan command, without asking for input.
Any help will be appreciated.
You should maybe check out this page; it mentions multiple packages that solve your problem. We currently use a combination of 2 packages. I think the first one has what you want.
We use 2 packages to solve this issue. One is for the basic translations that don't get added dynamically; for this we used waavi/translation.
You still need something for dynamically created or removed translations, which you need if you want your models to contain multi-language descriptions or something similar. For this we used dimsav/laravel-translatable.
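A rough sketch of what a translatable model looks like with that package (the model and attribute names are just examples, and it also expects a companion ProductTranslation model):

<?php

use Illuminate\Database\Eloquent\Model;
use Dimsav\Translatable\Translatable;

class Product extends Model
{
    use Translatable;

    // These attributes live in a separate product_translations table.
    public $translatedAttributes = ['name', 'description'];
}

// Usage: $product->translate('de')->name, or just $product->name for the current locale.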
With both of those you are all set, but you can also see if you like another package over the ones I listed.
I am wondering if it is possible to accomplish the following, given some context and example.
I have files in "Server\Share\Folder\File##.ext"
Sometimes the "File##.ext" can be "File01.ext" through "File20.ext", and other times it may be "File01.ext" through "File40.ext"
Sometimes there are less of these files, sometimes there are more.
I want a batch file to take the files from "Server\Share\Folder\File##.ext" and move them to "Server\Share\OtherFolder\File##.ext". I know I can accomplish this easily with:
copy /y "Server\Share\Folder\File01.ext" "Server\Share\OtherFolder\File01.ext"
Then just add another line for each extra file (File02.ext, File03.ext, etc.), but I am wondering if it is possible to make it so that any file that matches "File##.ext" is included, so that no matter how many files I have, it always works without issue.
Thanks in advance for any and all advice!
EDIT
Someone mentioned using wildcards, but my question with that is: let's say those files are File01.ext through File05.ext; will it match what it finds to the newly moved file? That is, will File?? match File01 at the source and keep the name File01 at the destination?
You can accomplish this task with a FOR loop in a batch file.
You can also loop through the commands using a :label and a variable name.
Combining these two would help you get what you want.
We can help you with ideas and a little bit of the coding, but the effort must be done by you, so you can learn programming better.
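To answer the edit: yes, a wildcard preserves each matched file's name at the destination. A minimal sketch of the FOR-loop approach (assuming UNC paths like \\Server\Share\...):

@echo off
rem Copy every File##.ext from the source folder to the destination,
rem keeping each file's original name, however many files exist.
for %%F in (\\Server\Share\Folder\File*.ext) do (
    copy /y "%%F" "\\Server\Share\OtherFolder\"
)

A single copy /y "\\Server\Share\Folder\File*.ext" "\\Server\Share\OtherFolder\" would also work, since copy accepts wildcards directly.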
I am working on a module that supplies methods for navigating directories and manipulating files. Basically it will be a combination of the Dir and File classes, with options specific to the needs of a project I'm working on.
Right now I have started writing tests for some of these methods and things are getting messy.
Example
One of the methods I have is a tree function that returns a hash of files and folders where you can pass options like tree(only: 'folders', limit: 3). In order to test that it only goes down 3 levels, I would have to have 4+ levels of nested subfolders with dummy files in them.
The Problem
Right now I'm testing on folders outside the project since the subfolders are already there, but I want to move away from this, especially considering the implausibility of testing on system files once I start testing methods equivalent to rm -rf (as well as the lack of portability).
I'm starting to think that I need to create a "lab rat" type folder that I do all my "experiments" on, but I have no clue how to approach creating it.
Do I create a function that creates the files?
Do I pull files and folders from another location?
Do I use some sort of "lorem ipsum" generator for file structures?
Do I make all these files and folders manually (ugh)?
Do I just mock and stub the hell out of everything and not actually create/delete the files and folders? (I don't see this happening.)
So...
How would someone normally approach testing excessive amounts of file and folder manipulation?
I don't think you want to use mocks/stubs. The file system of your OS should be well tested and fast, so the benefit of mocks/stubs is minimal. Creating a mock/stub system increases the complexity without much benefit.
Here are my answers:
Do I create a function that creates the files?
Yes. You can create tests for these functions to make sure that they are correct. Instead of calling Dir and File, write helper functions that make the code simple and readable. Maybe you can share the helper functions between the source/test code...
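A minimal sketch of such a helper, using Ruby's standard tmpdir and fileutils libraries (the nested-hash format and names are just one possible design):

require 'tmpdir'
require 'fileutils'

# Build a throwaway directory tree from a nested hash, yield its root,
# and clean everything up afterwards.
def with_tree(structure)
  Dir.mktmpdir do |root|
    create_entries(root, structure)
    yield root
  end
end

def create_entries(base, structure)
  structure.each do |name, contents|
    path = File.join(base, name.to_s)
    if contents.is_a?(Hash)
      FileUtils.mkdir_p(path)
      create_entries(path, contents)
    else
      File.write(path, contents.to_s)
    end
  end
end

# Usage in a test, e.g. checking that tree(limit: 3) stops at three levels:
with_tree('a' => { 'b' => { 'c' => { 'd.txt' => 'deep' } } }) do |root|
  # run assertions against root here
end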
Do I pull files and folders from another location?
Not sure what this is for...
Do I use some sort of "lorem ipsum" generator for file structures?
Yes, if you mean create functions that generate file structures.
Do I make all these files and folders manually(ugh)?
No.
Do I just mock and stub the hell out of everything and not actually create/delete the files and folders?(I don't see this happening)
No. One benefit of creating files/directories is that you can manually check what is going on and not be 100% dependent on the tests. This is actually a good approach, because without it there could be a bug where both the source code and the test code are not doing what you expect, but you wouldn't know because everything seems to be working.
Instead of putting all the i18n resources into a single message file, I want to divide them into several files. Can anybody kindly tell me how I can do that? The Play documentation doesn't give me any idea.
Use the Messages Module; it's really nice for this purpose.
You can't in Play. Anyway, why would you need to do that? It makes it harder to find where the I18N keys are. If it's for "visual" purposes, just use comments (##) to create sections in the file.
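For example, sections in a single conf/messages file might look like this (the keys are hypothetical):

## --- Authentication ---
login.title=Sign in
login.error=Invalid credentials

## --- Dashboard ---
dashboard.title=Overview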
I've created a PHP file called pagebase.php that I'm quite proud of. It contains a class that creates the whole HTML file for me from input such as CSS links and JS links.
In any case, this file is several hundred lines long, as it includes several helper functions such as cleanHTML(), which removes all whitespace from the HTML code and then, in layman's terms, makes the source look pretty.
I have decided to use this pagebase in all my projects, particularly in all my internal projects. I also plan to add to and expand the pagebase file quite a lot. So what I'm wondering is if it's possible to set the allow_url_include option to on, but just on this one single file.
If I got my theory right, that would allow me to include() that file from any server and get the pagebase class.
So what I'm wondering is if it's possible to set the allow_url_include option to on, but just on this one single file.
No, as far as I'm aware this is not possible.
What you are planning to do sounds like a bad idea anyway, though. An include that gets loaded over the web on every request is awful for performance.
You should keep local copies of your library, and use an update script (or version control system) to keep versions up to date.
That is a bad practice.
You should put this file along with the project that needs it and locally include() it.
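For example (assuming a lib/ folder inside the project; the class name is hypothetical, based on the question):

<?php
// Keep a local copy of the library inside the project and include it.
require_once __DIR__ . '/lib/pagebase.php';

$page = new PageBase(); // hypothetical class from pagebase.php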
10 years and still no good solutions? Here is one using allow_url_fopen, as a convenient way to toggle a php.ini setting from within your PHP file.
Replace allow_url_fopen in my example with allow_url_include to address the question.
Keep in mind this only works if the global php.ini rules don't lock the setting down.
In the code, '1' means on and '0' means off.
<?php
// Report the current setting, then try to enable it for this script.
// Note: allow_url_fopen is a PHP_INI_SYSTEM directive, so ini_set() may
// silently fail depending on how the server is configured.
echo ini_get('allow_url_fopen');
if (!ini_get('allow_url_fopen')) {
    ini_set('allow_url_fopen', '1');
}
echo ini_get('allow_url_fopen');
Put your own code here, in between, using either fopen() or copy(); even curl_init() might work.
// Afterwards, turn the setting back off.
echo ini_get('allow_url_fopen');
if (ini_get('allow_url_fopen')) {
    ini_set('allow_url_fopen', '0');
}
echo ini_get('allow_url_fopen');
?>
This also works for several other php.ini directives. I believe this trick could become a security issue in a lot of PHP code further into the future, so it's surely a good thing to monitor in plugins, now or later, for anti-malware signatures. At the very least, use the right global settings from now on.