The summary: As far as I can tell, when I run srb init, it requires every single file. Is there a way to disable or customize this behavior before the config is generated in sorbet/?
I ran into some trouble with this since my team keeps gems in a non-standard location (it's a polyglot monorepo). In particular, I'd like to tell Sorbet to ignore things in _build, db, and script. Short of adding a typed: ignore sigil to every file (apparently this won't work for us due to how our gems are set up), how can I make Sorbet ignore these?
(Some background: we tried to adopt Sorbet's static checks when it first came out and could not, because we use Rails and the tooling was not working well enough yet. We found the runtime checks really useful, however, and have been using those extensively. I've been re-evaluating the static side every couple of months, and have consistently gotten stuck when trying to create the sorbet directory!)
I believe you can create and modify the config file first, and then run the whole process. You can accomplish this by running srb rbi config first, and then adding a line like --ignore=db/ to the newly created file at ./sorbet/config.
For multiple directories you can put one on each line:
--dir
.
--ignore=db/
--ignore=vendor/
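Applied to the directories from the question, the whole sorbet/config might look like this (a sketch; the exact paths are assumptions about your repo layout):
--dir
.
--ignore=_build/
--ignore=db/
--ignore=script/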
I am making my own personal package to hold a collection of useful programs and configs. The main idea is to emerge this package and have the system prepared to my preferences. Mostly it works (it simply depends on all my favourite programs), but I have two problems here:
How do I install USE flags, unmask entries, and such before the affected programs are installed?
How do I uninstall it? emerge --unmerge does NOT delete files in /etc, so even after uninstalling the package, the USE flags (and other settings) are still kept. My intent is to REMOVE them, so the next rebuild of world would NOT use them anymore. Yes, that means a lot of programs would lose some functionality (support for some languages, support for some other programs, and so on); that is the desired result.
My solutions so far are:
The package has some files in /etc/portage/package.*:
1.1. I emerge that package with --nodeps (so the config files are installed).
1.2. I emerge it again without that flag (so the dependencies are installed with the right configuration).
I create (and install) a script that parses /var/db/packages for my package's CONTENTS and deletes all the relevant /etc/portage files "manually". I have to run this script before unmerging the package.
Is there a better way to do it?
You're just doing/understanding it wrong! (sorry :)
First of all, instead of a metapackage (an empty ebuild that has only runtime dependencies), there are other ways:
Use sets to describe your preferred packages, and manage your USE flags in the usual way (including per-package USE if needed); see the sketch after this list.
A medium-complexity solution is to write a metapackage ebuild (your current case) -- but you can't mask/unmask USE flags that way anyway…
If you already have your own overlay (obviously), defining your own profile would solve everything! There you can manage everything just the way you want: mask/unmask any USE flags, define what the predefined system set means for you, etc…
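For the sets approach, a minimal sketch (this assumes a Portage version with /etc/portage/sets support; the set name and package atoms are made-up examples). Create /etc/portage/sets/my-tools with one atom per line:
app-editors/vim
dev-vcs/git
app-misc/tmux
Then emerge @my-tools installs the whole collection, while your USE flags stay in /etc/portage/package.use as usual.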
Unfortunately, I don't use Gentoo portage (and emerge) and have no idea if it's possible to have multiple additive profiles. I have my own profiles here and it works perfectly with Paludis.
Second, never remove any (config-protected) configuration files on uninstall! No package does that, and there are a bunch of reasons why… The main one is that the user may have modified them and doesn't want to lose his changes. Moreover, I personally prefer to keep every config I've ever touched in a dedicated VCS repo -- it wouldn't be nice if someone other than me removed something…
Imagine a real-life example: a user wants to reinstall some package, and he has a bunch of configuration files that he spent some time carefully editing. The trivial way is to uninstall and then install again -- oops! He's lost his configs!
Moreover, from an ebuild's point of view, you have the pkg_prerm and pkg_postrm functions, but both of them are also called at upgrade time (i.e. when an unmerge is followed by an immediate merge phase). You have to be really careful to distinguish those use cases… And what is scarier, once any "hardcoded" (and unique) rules live in a package, you have no influence over them…
So, please, never remove any config-protected files; let the user take care of them (he is the boss, not the package manager)…
Update: If you really want to be able to remove some config-protected files, setting up your own profile looks like an even better idea. You can set CONFIG_PROTECT_MASK to forcibly unprotect files and/or directories. That way you don't need to modify any ebuilds or write ugly cleanup code.
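As a one-line sketch (the path is an example; this can live in your profile's make.defaults or in /etc/portage/make.conf):
CONFIG_PROTECT_MASK="/etc/portage"
Anything under a CONFIG_PROTECT_MASK path is treated as unprotected, so unmerging the package can remove those files.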
I am dealing with a huge code base where features are grouped by domain and kept in separate packages.
+ServicesDomain
|---+features
|    |---+step_definitions
+SalesDomain
|---+features
|    |---+step_definitions
But there are always some common steps and I could not find a way to keep the common step definitions in some common steps package.
What I would like to know is whether there is a way to keep all the generic steps in some common package and make my domain packages depend on that generic steps package to leverage the generic steps.
One way of doing it would be to set up a file called features/support/env.rb and add a 'require' statement to this file to include your common steps each time. For instance, the file could contain:
require File.join(File.dirname(__FILE__), '..', 'common', 'common_steps')
That way your common_steps.rb would be loaded each time.
env.rb is the first file to be run on each Cucumber run.
Have you also heard of the Pickle gem? That might be worth a look, as it does something similar by putting common steps into a file called 'pickle_steps.rb'. It comes with handy step definitions for testing models, but there would be nothing stopping you from editing that file and adding your own.
Hope that is of some help.
Cucumber has a command-line option, -r, that allows you to include specific files. If you run cucumber --help, you can get more information about all the command-line options. In Ruby you can combine this with a cucumber.yml file to set up globally how Cucumber is run in the project. However, I suspect you are using Java, and I don't know if that applies there. You could ask on the Cucumber mailing list if that's the case.
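For instance, a cucumber.yml sketch run from inside one of the domain packages (the ../common/step_definitions path is an assumption about where the shared steps live):
default: -r features -r ../common/step_definitions
With this profile, a plain cucumber run picks up both the local and the shared step definitions.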
An alternative would be to place a file in each support directory, e.g. ServicesDomain/features/support and SalesDomain/features/support, that just has a require statement that pulls in all the common steps. Cucumber automatically loads all the files in features/support (when it is run from features/..).
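A sketch of such a support file (the file name and the sibling common/ directory are assumptions):
# ServicesDomain/features/support/load_common_steps.rb
# Pull in every shared step definition from the common package.
Dir[File.join(__dir__, '..', '..', '..', 'common', 'step_definitions', '*.rb')].sort.each do |file|
  require file
end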
I've noticed that projects such as Bundler require spec_helper in each spec file.
I've also noticed that rspec takes the option --require, which allows you to require a file when rspec is bootstrapped. You can also add this to the .rspec file, so it is added whenever you run rspec with no arguments.
Are there any disadvantages to using the above method which might explain why projects such as bundler choose to require spec_helper in each spec file?
I don't work on Bundler, so I can't speak directly about their practices. Not all projects check in the .rspec file. The reason is that this file, by current convention, generally only has personal configuration options for output / runner preferences. So if you only required spec_helper there, others wouldn't load it, causing tests to fail.
Another reason is that not all tests may need the setup performed by spec_helper. More recently, there have been groups of Rubyists trying to move away from loading too many dependencies into the test. By making it explicit when spec_helper is required, people reading the test have an idea of what may be going on. Also, running a single test file or directory that doesn't need that setup will be faster.
In reality, if all of your tests require spec_helper and you've made it a clear convention on the project, there's no technical reason you can't or shouldn't do it. It just may be an initial surprise for new people who join the project.
With a proper setup, there's no downside at all.
The .rspec file is meant to be project-related (and should be committed like any other project source file).
Meanwhile, .rspec-local is for overriding with personalized settings (and it only lets the user override some options).
(see: https://www.relishapp.com/rspec/rspec-core/v/3-2/docs/configuration/read-command-line-configuration-options-from-files)
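A minimal sketch of the two files (the specific options are just examples). The committed .rspec:
--require spec_helper
And a personal .rspec-local, kept out of version control:
--format documentation
--color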
Rails projects even use a separate --require rails_helper for Rails-specific RSpec settings.
Both the .rspec and --require options have existed since 2011 at least (which is ages ago).
And, RSpec is especially not a tool for people needing training wheels - you want people to understand how RSpec works and what the options are, because you want people to know e.g. when and where to temporarily set the --seed option, how to change the formatter, switch on --fail-fast, use specific tags to work on a feature, etc.
The test environment also has to have a consistent configuration, so you do not want people tweaking the spec_helper file or the .rspec file (for the same reason).
In short: if it doesn't apply to every spec in the project, it shouldn't be in the spec_helper file. Which is why you should make sure it is included by every spec file. And the .rspec file is the best way to do that.
The only reason not to switch to this is when the project is being actively maintained by many people (any project-wide change creates annoyance by, e.g., forcing people to rebase their work for reasons unrelated to what they were working on).
Bundler fits into this category: too many people potentially working concurrently.
P.S. I also recommend using rspec --init on an empty project and checking out the generated config.
I'm playing around with a combination of Thin, Sinatra, and Bundler. I'm trying to understand how to get Thin to include the path to my source code in the load path. I have looked for introductory tutorials on this setup, but none of them seem to answer my question.
Mucking around with the rackup file or Thin config file feels wrong. Assume I have a directory structure with something like:
bin/my-application-entry.rb # The entry point to my sinatra application
lib/myapp/mylibs.rb
thin/config.ru # rackup config
thin/dev.yaml # thin config
Gemfile # for my dependencies
The contents of the rackup file are essentially:
require 'sinatra'
# I'd like to require 'my-application-entry' around here somewhere (I think?)
run Sinatra::Application
I invoke the application with
thin -C thin/dev.yaml -R thin/config.ru start
I noticed that thin takes a command-line argument to require a specific library, but surely there is a better place where you can define all the load paths?
So my question is really, how do I tell thin/rack/bundler which directories to include? (such as bin/ and lib/)
Edit: For clarity, I'd really like to know how this is generally done with Thin specifically. I am reluctant to modify $: in my main application, but if I am forced to use $:, where is the best place (in a Thin/Rack/Sinatra context) to do so?
$: is a global variable holding the load path, represented as an array. You can add "." to the front and, if you care, eliminate duplicates, as follows:
$:.unshift(".").uniq!
Or you can push it to the end of the list:
$:.push(".").uniq!
"." is omitted from the load path by default because it's a potential security hazard.
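Applied to the layout in the question, a sketch of thin/config.ru (the require names come from the question's file layout; paths are resolved relative to config.ru itself):
# Add the project's lib/ and bin/ directories to the load path.
root = File.expand_path('..', File.dirname(__FILE__))
$:.unshift File.join(root, 'lib')
$:.unshift File.join(root, 'bin')

require 'sinatra'
require 'my-application-entry'

run Sinatra::Application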
I'm wondering the same thing. I'll tell you what I'm doing and try to justify it. Perhaps in the process of writing this, I'll have figured something out.
I have decided, for now, to add to $LOAD_PATH in config.ru, because I'm using config.ru as the entry point of my application. I can get away with this because I intend to leave this file in the root of the project. I don't like making that assumption, but it seems pretty safe for now. Of course, I'll leave a comment saying "Ew. Assumes that this file is the only entry point of the application and that it will always be in the root of the project."
I chose this because I want to add to $LOAD_PATH as high up in the call stack as possible (Dependency Inversion Principle), but doing so in the commands that run the app (Procfile running thin in production; command line running shotgun in development) duplicates the directories that I want to add to the load path. Do I classify this duplication as essential (worth removing) or coincidental (worth duplicating)? For now, it looks like an essential aspect of the app, at least as I've decided to organise it, because the app integrates the various services into a single request/response/routing engine. Also, config.ru describes the app, so it makes sense to treat it as the app.
If the situation changes and I notice that I want a different $LOAD_PATH in development than in production, then I'll move the directories up the call stack into the command that runs config.ru.
I feel comfortable with this choice, not because I consider it "the right choice", but because I think I know how and why I'd change my mind. Good enough for me.
I hope this helps.
I would like to upload documents to GoogleDocs every time the OS hears that a file was added/dragged/saved in a designated folder, just the way DropBox uploads a file when you save it in the DropBox folder.
What would this take in Ruby, what are the parts?
How do you listen for when a File is Saved?
How do you listen for when a File is added to a Folder?
I understand how to use the GoogleDocs API and upload things once I get these events, but I'm not sure how this would work.
Update
While I still don't know how to check if a file was added to a directory, listening for when a file is saved is now dirt simple, thanks to the Guard gem for Ruby.
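For instance, a minimal Guardfile sketch (assumes the guard and guard-shell gems; the puts is a stand-in for the actual upload):
guard :shell do
  watch(/.*/) { |m| puts "#{m[0]} changed" } # trigger the Google Docs upload here
end
Running bundle exec guard in the project root then reports every save in the watched tree.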
If I were faced with this, I would use something like git or bzr to handle the version checking and just call add then commit from your script and monitor which files have changed (and therefore need to be uploaded).
This adds the benefit of full version control over files and it's mostly cross platform (if you include binaries for each platform).
Note this doesn't handle your listening problem, just what you do when you know something has changed. You could schedule the task (via various routes) but I still like the idea of a proper VCS under the hood.
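A sketch of the detection side (assumes the watched folder is a git repository and git is on the PATH; the puts is a stand-in for the upload):
# Files that are new or modified since the last commit ("XY path" porcelain format).
changed = `git status --porcelain`.lines.map { |line| line[3..-1].strip }
changed.each { |path| puts "needs upload: #{path}" }
# Snapshot the current state so the next run only sees newer changes.
system('git add -A && git commit -m snapshot') unless changed.empty?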
I just found this: http://www.codeforpeople.com/lib/ruby/dirwatch/
You'd need to read over it as I can't vouch for its efficiency or reliability. It appears to use SQLite, so it might be better just to manually check once every 10 seconds (or something along those lines).
Ruby doesn't include a built-in way to "listen" for updates to files. If you want to stick to pure Ruby, your best bet would be to perform the upload on a fixed schedule (say every 5 minutes) regardless of when the file is saved.
If this isn't an acceptable alternative, you could try writing the app (or at least certain parts of it) in Java, which does support this type of thing. Take a look at JRuby for integrating the Ruby and Java portions of your app.
Here is a pure ruby gem:
http://github.com/TwP/directory_watcher
I don't know the correct way of doing this, but a simple hack would be to have a script running in the background that checks the contents of a bunch of folders every n minutes and uses the associated timestamps to determine whether a file was modified in that span of time.
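A minimal sketch of that polling loop (the directory, the interval, and the puts stand-in for the upload are all assumptions):
watch_dir = File.expand_path('~/watched')
last_check = Time.now
loop do
  Dir.glob(File.join(watch_dir, '**', '*')).each do |path|
    # Anything written since the last pass needs to be uploaded.
    next unless File.file?(path) && File.mtime(path) > last_check
    puts "changed: #{path}"
  end
  last_check = Time.now
  sleep 600 # poll every 10 minutes
end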
You would definitely need some native OS code here to write the monitoring service/client. I'd pick C++ if you want it to be cross-platform. If you decide to go with .NET, for example, you can use the FileSystemWatcher class to achieve what you need (see its documentation and related articles).
Kind of an old thread, but I am faced with doing something similar and wanted to throw in my thoughts. The route I'm going is to have a Ruby script that watches a given directory and checks the timestamps. Once all files have been uploaded, the script saves the latest timestamp, then polls the directory again, checking whether any files or folders have been added. If files are found, the script uploads them and updates the global timestamp, and so on.
The downside is that setting up a Ruby script to run continually (or as a service) is somewhat painful. But it's not an overwhelming task; it just needs to be thought out properly.
It also depends on whether your users are competent enough to have Ruby installed, or whether you have to package everything up into a one-click installer as well. That, to me, is the hardest part to figure out.