I've got a Sinatra app and I want to start doing development and production in Docker. I've seen blogs like this one: https://blog.codeship.com/build-minimal-docker-container-ruby-apps/ which advocate using a slimmed-down Alpine image. I understand the value of a smaller image, but in my research I can't find a clear explanation of how to know whether I will need, now or in the future, the full Docker Ruby image. Do I start with Alpine for dev and prod, and assume that if I later need the bigger image, I'll switch to it? What would be an example of a new requirement that could come up that would require the larger Docker Ruby image? Thanks.
Well, it depends on your project's dependencies and on your workflow, and so far you're the only one here who knows those.
You can still install system dependencies during the build with a Dockerfile, using the image's package manager (apk on Alpine, apt on Debian-based images), as well as Ruby gems that wouldn't necessarily be present even in a larger Ruby base image. For those dependencies a larger image wouldn't help anyway; you'd still have to install them.
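As a hedged sketch of what that looks like on Alpine (the image tag, the package names, and app.rb are illustrative assumptions, not prescriptions):

# Hypothetical Dockerfile: Alpine-based Ruby image with native build dependencies
FROM ruby:2.4-alpine

# build-base and the -dev packages are only examples of what a gem
# with native extensions (nokogiri, for instance) might require
RUN apk add --no-cache build-base libxml2-dev libxslt-dev

WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .

CMD ["bundle", "exec", "ruby", "app.rb"]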
Other services (databases, for instance) can, and should, easily be handled by other containers such as mysql, and therefore wouldn't require expanding your Ruby image.
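For instance, a minimal docker-compose sketch pairing the Sinatra app with a MySQL container (service names, the port, and versions are assumptions for illustration):

# docker-compose.yml - hypothetical sketch; the database lives in its own container
version: "2"
services:
  web:
    build: .
    ports:
      - "4567:4567"   # Sinatra's default port
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example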
That question is very broad, so it's difficult to give a very specific answer.
Based on the response, I've decided to use Alpine and see where it leads.
I have read through Q&As and articles that explain the ideal structure of a Ruby project. I have read the RubyGems guides on how to create a Ruby gem. I have just read a Q&A asking at what point a Ruby project becomes a Ruby gem, but I cannot for the life of me see the difference between the two. The structure seems to be the same: the files, where they go, everything looks the same to me. Is it how they're used? Can someone please explain the difference between the two?
The question that must be answered with respect to 'gemifying' or not is: am I writing something that is readily reusable in a different context? If the answer is yes, then your application is a candidate for gemification. If not, then it is generally not worth the additional complexity of converting a Ruby project into a gem.
For example, suppose one makes a CLI Ruby application that collects mortgage rates from multiple vendors and updates a database. There are two ways this could be converted into a gem.
First: you could generalise the interface/configuration and make it useful as a plugin/add-on/extension for projects that need the same or similar functionality. Someone could then add the gemified version to their project, have it do the grunt work for them, and just make use of the results. This describes the most common use case for gems.
Second: you could also extract the framework of your CLI project layout into a generator gem that lets others easily create their own CLI project layouts. This is how Rails came to be.
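To make the distinction concrete: structurally, gemification mostly amounts to adding a gemspec on top of the conventional lib/ layout. A minimal, hypothetical gemspec for the mortgage-rates example (all names invented for illustration):

# mortgage_rates.gemspec - hypothetical sketch
Gem::Specification.new do |spec|
  spec.name    = "mortgage_rates"
  spec.version = "0.1.0"
  spec.summary = "Collects mortgage rates from multiple vendors"
  spec.authors = ["Your Name"]
  spec.files   = Dir["lib/**/*.rb"]
end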
I'm using Octopress as my blog engine. It's perfect. But when there are many posts, for example 400+, generation is painfully slow.
So, is there any way to speed up Jekyll/Octopress generation?
Thanks.
Obviously if you are just working on one post, there is no need to wait for the entire site to generate. What you are looking for is the rake isolate[partial_post_name] task.
Using rake isolate, you can "isolate" only the post you are working on and move all the others to the source/_stash folder. The partial_post_name parameter is just some words from the post's file name. For example, to isolate a post whose file name contains plain-english, I would use
rake isolate[plain-english]
This will move all the other posts to source/_stash and only keep the 2011-09-29-just-type-the-title-of-the-post-here-in-plain-english.markdown post in source/_posts. You can also do this while you are running rake preview. It will just detect a massive change and only regenerate that one post from then on.
(by Pavan Podila)
More Info: Tips for Speeding Up Octopress Site Generation
2013.01.08 update:
Hexo -- a fast, simple & powerful blog framework, powered by Node.js.
Features: Incredibly fast - generate static files in a glance.
2013.06.20 update:
gor -- a static website and blog generator engine written in Go
gor has the following awesome benefits:
1. Speed -- less than 1 second to compile all of the nearly 200 posts on wendal.net
2. Simplicity -- compilation produces only a single executable file, with no other dependencies
Install Ruby GSL
gem install gsl
You should notice a speed increase.
Hexo, powered by Node.js. I am using it, and it is much faster than Octopress. It also provides a simple way to migrate your articles to Hexo.
You can generate only one post while you are writing it using
rake isolate[your-post]
and then
rake integrate
to go back to normal.
To fully answer your question, you can't generate only one post. You can see Octopress' Issue #395 on that subject, which explains that this is due to a limitation on Jekyll's side.
I reached this post with the same problem, but didn't quite like the idea of rake isolate. The built-in task also doesn't integrate with the _drafts workflow.
So what I ended up doing was creating a custom config file with the _posts folder excluded (using exclude), so that only the drafts folder is built. You can pass a different config file to jekyll as a command-line parameter. I use this while actively writing new posts; to publish, I use the same old approach (which still takes some time). This way only the draft posts are built, and I'm fine with that.
jekyll build --watch --drafts --config _previewconfig.yml
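Such a _previewconfig.yml is essentially a copy of the normal _config.yml with the posts folder excluded; a hedged sketch of the relevant addition (folder names are whatever your site actually uses):

# _previewconfig.yml -- a copy of the normal _config.yml, plus:
exclude:
  - _posts   # skip the published posts; only _drafts get built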
For those interested in the complete workflow, take a look here.
If your blog has a lot of images (and other static assets that do not change between builds), it is worthwhile to exclude them from Jekyll's build process, and instead manually update them as needed.
For whatever reason, Jekyll's build is not intelligent about handling such assets: it deletes everything in the public folder and re-copies the contents of source on every build, which is wasteful when the assets haven't changed. This can be avoided by using a tool such as Robocopy (Windows) or rsync (Linux) that updates only what has changed.
To tell Jekyll to ignore a folder, add the following to _config.yml:
exclude:      # exclude from build
  - folderPath
keep_files:   # do not delete/empty the copy in `public`
  - folderPath
Then elsewhere, use whatever tool you want to update the folder.
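For example, either of these keeps the folder in sync by copying only what changed (the paths are illustrative):

# Linux/macOS: archive mode, delete files removed from the source
rsync -av --delete source/images/ public/images/

# Windows equivalent: /MIR mirrors the directory tree
robocopy source\images public\images /MIR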
For more things you can try, see this post.
I want to use Node.js as a Share.js server and Ruby for the frontend. As far as I know, Heroku only allows one web-facing process called "web". Does anyone have some experience trying to do something like this?
No. Heroku detects the application type when you push your code, and compiles the slug accordingly. You'd need to have them as separate applications with a defined API between the two (not always a bad thing).
UPDATE: You can 'stack' buildpacks these days, e.g. Ruby + PHP, so you could have both executed. See https://devcenter.heroku.com/articles/using-multiple-buildpacks-for-an-app for how to use multiple buildpacks in the same app.
As a caveat, you technically can install two languages on a single app — but I'm not sure about running them concurrently. I made this buildpack to combine NodeJS and PHP (so that I could run Grunt during the slug compilation):
https://github.com/gcpantazis/heroku-buildpack-php-gruntjs
The language detection is usually fairly dumb; it looks for a file indicative of the language, e.g. index.php or a Rakefile. You'll have to change the detect script so that your code passes detection.
Update:
Even better, consider using https://github.com/ddollar/heroku-buildpack-multi ; it lets you run buildpacks sequentially. Depending on your application, you might need to find language buildpacks that don't have verification steps (such as checking for a package.json file in a Node.js app).
Yes, it is mostly possible, I believe, as long as you're not doing something very tricky. I once deployed a Flask (Python) app that used Stanford's CoreNLP, which is written in Java. You will need heroku-buildpack-multi.
After adding this, make sure to create a .buildpacks file and add all the buildpacks you will need from the Heroku GitHub.
This bypasses Heroku's own app-type detection and makes it install every buildpack listed in the .buildpacks file.
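The .buildpacks file is just a list of buildpack Git URLs, one per line. For the Node.js + Ruby combination in the original question, it might look like this (the URLs shown are the standard Heroku buildpack repos):

https://github.com/heroku/heroku-buildpack-nodejs
https://github.com/heroku/heroku-buildpack-ruby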
I've written a Sprockets processor for losslessly compressing JPGs and PNGs; you can check it out here: https://github.com/botandrose/sprockets-image_compressor
However, I can't use this gem on Heroku, because the jpegoptim and pngcrush programs aren't available in their environment. Furthermore, users of the gem have to remember to install these programs on every system where they want to use it. So I think it would be nice if I could vendor these binaries in as a fallback for systems that don't already have them installed.
So, is this totally crazy? Would I need to provide a 64-bit binary as well as a 32-bit one? Would I still require some sort of external library to be installed? Would I be better off writing some sort of C extension that hooks into these programs?
I haven't seen many gems in the wild that do this sort of thing. One other option, however, is to provide rake tasks that fetch the programs if they aren't already installed on the machine. It might be tricky to make that work across all the different platforms, though.
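A hedged sketch of the vendored-fallback idea in Ruby (the helper name and the vendor/bin path are invented for illustration; this is not how the gem above actually works):

# Hypothetical helper: prefer a system-installed binary, fall back to a vendored copy
def find_binary(name)
  # search every directory on PATH for an executable with this name
  ENV["PATH"].split(File::PATH_SEPARATOR).each do |dir|
    candidate = File.join(dir, name)
    return candidate if File.executable?(candidate)
  end
  # fall back to a binary shipped inside the gem (path is illustrative)
  vendored = File.expand_path("../../vendor/bin/#{name}", __FILE__)
  File.executable?(vendored) ? vendored : nil
end

pngcrush = find_binary("pngcrush") or raise "pngcrush not found on this system"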
With regard to using your gem on Heroku: remember that Heroku has a read-only filesystem (except for the /tmp directory), so running Sprockets processors like yours on Heroku isn't really a practicable option anyway. I personally just run rake assets:precompile and commit all the precompiled assets to my Git repo before pushing to Heroku. Yes, I know it messes up the repo history, but it's the easiest way to go (at least for now). Hopefully Heroku will come up with another option in the future.
As for the main question you asked, hopefully someone else can provide a good answer. Your project looks very cool; I'm going to try it out.
My team and I are starting to build up a few reusable scripts. They're reusable within our org only, as they work with proprietary apps and our particular server environment, so they're not really suitable for RubyForge or GitHub, etc.
My question is, what is the best practice for ensuring we're all using the latest and greatest scripts across all users? We pretty much run these scripts on one server, but may need to expand to others.
Should we bundle them into gem(s) and start a private gem server?
Or something simpler like a common, shareable lib directory. Perhaps with a script to download/update from our SCM?
Other ideas?
Thanks....
This depends on some factors, like how many people need to change the code (only your team, or others too) and how much money you have for it.
Personally, I'd create a build + gem server where you upload the scripts using a version-control system (like git or svn, depending on how many people are working on the project), and then set up a cron job that automatically generates the gems from the sources at regular intervals and stores them as different versions. This way you can be sure you always have an authoritative server storing your application's gems, and you can always get an earlier version if something breaks. Your script might create separate gem version names, like "appserv-edge" or "appserv-stable".
You might also want to check out GitHub's closed-source options, if you have the money for that. I don't know, however, whether they offer gem building and hosting facilities for non-open-source programs.
I've created a private gem server and it's dead easy. The only tricky bit is deciding how you want your users to upload gems. Personally, I just use a PHP upload form and have it check that it's not masking any existing gems.
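For reference, the simplest starting point is RubyGems' built-in server; the commands below are real, while the host name is a hypothetical stand-in for your internal server:

# On the server: serve every locally installed gem over HTTP (port 8808 by default)
gem server

# On each client: add the private server as a gem source
gem sources --add http://gems.internal.example:8808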
At my office we're using a bit of a hybrid approach for some of our shared scripts and libs. We do bundle them all into a gem, but rather than using a gem server we keep them in source control, build the gem (using newgem), and install it locally as necessary.
The downside of this approach is that it takes two commands instead of one to install the gem, but this is largely mitigated in QA and production environments, since we use Capistrano for deployment.
The upsides are that it's dead simple, and in development there is a very short edit/build/deploy cycle when you're working on something that requires changes to the gem. I'm currently pulling a lot of common functionality into the shared gem, so I'm really appreciating that aspect.
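For clarity, the two commands in question are just a build followed by a local install; with a hypothetical gem name:

# Build the .gem file from its gemspec, then install the resulting file locally
gem build our_tools.gemspec
gem install ./our_tools-0.1.0.gem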