I am working on a task that needs to check out source from a GitHub repository and then modify some files in the checked-out repository based on existing configuration data that comes from a separate call to a different web service as JSON. The changes to the checked-out code are temporary and will not be pushed back to GitHub.
Once the checked-out source is processed and modified based on the configuration data, I will create a compressed archive of the resulting source.
I just discovered Capistrano and it seems great for this entire process, even though the process has nothing to do with deployment. On the other hand, I could simply use plain Ruby to do the same thing. Currently I am leaning toward using Capistrano with custom tasks.
So you could say it's an app based on Capistrano itself, with local "deployment". Does that sound like a sane approach? Should I write it in plain Ruby instead? Or maybe write parts of the application in pure Ruby and connect the pieces with Capistrano? Any suggestion is welcome.
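To make it concrete, in plain Ruby the whole pipeline would be roughly the following (the repository URL, config endpoint, and paths are made-up placeholders):

require 'json'
require 'net/http'
require 'fileutils'

repo    = 'https://github.com/example/project.git'   # placeholder
workdir = '/tmp/project-build'
config  = JSON.parse(Net::HTTP.get(URI('https://config.example.com/settings.json')))

FileUtils.rm_rf(workdir)
system('git', 'clone', '--depth', '1', repo, workdir) or abort 'clone failed'

# Rewrite files in the checkout based on the configuration data.
Dir.glob(File.join(workdir, '**', '*.yml')).each do |file|
  File.write(file, File.read(file).gsub('PLACEHOLDER', config.fetch('value', '')))
end

# Archive the result; the temporary checkout is never pushed back.
system('tar', '-czf', '/tmp/project.tar.gz', '-C', workdir, '.') or abort 'archive failed'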
I sincerely recommend Thor (see GitHub). It's a pure-Ruby task framework like Rake, whereas Capistrano carries a lot of cruft for server cluster grouping and connection handling, and Rake itself leans toward classical "make"-style build tasks.
My recommendation is a set of Thor tasks, using raw Net::SSH (Capistrano is built on Net::SSH) where appropriate.
For the checkout part, I recommend you watch the "Amp" project… they're coming up with a consistent cross-SCM way to do checkouts (but that's the least of your problems). You can take a look here, though it's early days for them yet: http://github.com/michaeledgar/amp
Sources: (as the Capistrano maintainer, I'm planning on throwing out our own DSL and replacing it with Thor, since it makes a lot more sense)
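A rough sketch of what that could look like (the task name and arguments are invented, and the modify step is whatever your config data dictates):

require 'thor'

class Build < Thor
  desc 'package REPO CONFIG_URL', 'Check out REPO, apply the JSON config, archive the result'
  def package(repo, config_url)
    workdir = '/tmp/checkout'
    system('git', 'clone', '--depth', '1', repo, workdir) or abort 'clone failed'
    # fetch and parse the JSON config, rewrite files under workdir,
    # then tar up the working directory exactly as you would in plain Ruby
    system('tar', '-czf', 'build.tar.gz', '-C', workdir, '.')
  end
end

Build.start(ARGV)

Saved as, say, build_tasks.rb, that runs with `ruby build_tasks.rb package <repo> <config-url>`.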
As for me, I write things like this in a Rakefile and then use a rake command to call them.
You'll find that Rakefiles are similar to Capfiles, so rake is usually used for local tasks and cap for remote ones.
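For instance, a Rakefile along these lines (names and URLs are just examples) covers the same ground:

# Rakefile
desc 'Check out the source, apply the remote JSON config, and build a tarball'
task :package do
  rm_rf '/tmp/checkout'
  sh 'git clone --depth 1 https://github.com/example/project.git /tmp/checkout'
  # ...fetch the JSON config and rewrite files under /tmp/checkout...
  sh 'tar -czf project.tar.gz -C /tmp/checkout .'
end

Then `rake package` runs the whole thing locally.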
I'm going to write a service that uses the AMQP protocol, with no HTTP at all. I like Hanami's repository/entity/interactor paradigm and would like to use those pieces in my project. Generating all that stuff by hand, sure, is boring.
So I'd like to grab the rake tasks too. Digging into config/environment and so on is tedious. In short, what is the best way to use those tools without the Hanami router and controllers? Or is it all tightly integrated?
As I see it at the moment, there are two ways:
a) Include only hanami-model in my Gemfile, then copy every needed file from the hanami gem by hand.
b) Create a Hanami project and simply not use rackup.
I'm disappointed.
Alternatively, you can add hanami as a development gem. That gives you access to the code generators. At the deploy stage, you don't bundle hanami, so the app will only have hanami-model and hanami-utils in production.
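A rough Gemfile sketch of that setup (version constraints omitted):

# Gemfile
source 'https://rubygems.org'

gem 'hanami-model'
gem 'hanami-utils'

group :development do
  gem 'hanami'   # only for the code generators
end

Installing with `bundle install --without development` on the server then leaves hanami itself out of production.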
Hello. If I understand you right, you want to use interactors together with models only. Interactors can be used as a regular Ruby library.
For the model, you need to configure all this stuff and load it into memory. You can check the example from our playbook. Hope it'll be helpful for you:
https://github.com/hanami/playbook/blob/master/development/bug_templates/model_psql.rb
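For the interactor side, a minimal sketch (class and attribute names are invented, and the exact API varies a bit between hanami-utils versions):

require 'hanami/interactor'

class PlaceOrder
  include Hanami::Interactor
  expose :order

  def initialize(params)
    @params = params
  end

  def call
    error!('no items given') if Array(@params[:items]).empty?
    @order = { items: @params[:items] }
  end
end

result = PlaceOrder.new(items: %w[book pen]).call
result.successful?  # => true
result.order        # => { items: ["book", "pen"] }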
I'm using EngineYard for my production system. My deployment has Ruby 1.9.3p392. I develop on Ruby 1.9.3p429.
I get notifications from a 3rd-party server that contain large XML files (larger than 10 KB, anyway).
After a new deployment, for some reason, all of my notifications from this party are failing because the XML is larger than the 10 KB limit.
So on my dev instance I added the following line to application.rb:
REXML.entity_expansion_text_limit=102400
But that makes my deployment fail. So I look around and try another iteration:
REXML::Document.entity_expansion_text_limit=102400
Nope, that particular version of Ruby has no idea what I'm talking about.
What can I do to overcome this 10K default?
For some reason, REXML::Document needs to be explicitly required on EngineYard. Here's what I did to fix my deployment.
In application.rb:
require 'rexml/document'
REXML::Document.entity_expansion_text_limit=102400
That appears to have done it.
I am very new to Ruby and I wonder: is it possible to have my Ruby script deployed on a server?
Or do I have to use Rails?
As I understand it, Rails is not part of the core Ruby language, and Ruby has server functionality even without Rails (as in Java, PHP, etc.).
EDIT:
I have a Ruby script that acts as a command-line program, and I want to deploy it to an external (or even internal) server the way CGI scripts/programs used to be deployed.
Yes, you can deploy any Ruby application, not just Rails apps obviously. Take a look at Capistrano.
Deployment and serving are two different things, however. If you're looking for Ruby HTTP servers, look at Unicorn, Thin, WEBrick, or Puma.
If you want a fully-fledged solution try Heroku which handles both the deployment and web serving parts.
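If the script only needs to answer HTTP requests, the smallest thing all of those servers can run is a Rack app, for example (contents made up):

# config.ru -- run it with `rackup`, or point Thin/Unicorn/Puma at it
run ->(env) { [200, { 'Content-Type' => 'text/plain' }, ["Hello from Ruby\n"]] }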
There are many tools to deploy Ruby projects, but you can do it pretty much manually.
I also found it very hard to find a ready-to-go solution, and I think this is a very annoying gap in the RoR ecosystem.
I've been working on a solution to deploy a project to a server using Git, like the Heroku Toolbelt (google it, it's a really nice tool). The main concept is: you push your project with Git and the server does everything else! You can see my project here: https://github.com/sentient06/RDH/.
But please don't focus on that. Instead, read about how I arrived at all this information in the wiki: https://github.com/sentient06/RDH/wiki.
It is a bit outdated, but I can summarize here to you:
First, set up your server. This is the most boring part: you must set up all the configuration, security measures, remote access, and so on.
If you don't have a server, you can hire one specifically for RoR applications. There are a few good ones out there, and each has a different deployment workflow. But supposing you decide to set it up yourself:
I suggest any Linux or Unix system, server edition. Then install Ruby Version Manager, then Ruby, then Rails, and then an application server. I suggest Thin, but lots of people use Unicorn, Apache, or other servers; dig around a little on the internet and find an easy-to-use solution. If you do not use Apache, though, you will also need a reverse proxy, so you can redirect all requests on ports 80, 8080, etc., to your applications. I suggest Nginx (I don't like Apache; I think it's overkill).
Now, everything done, the deploy process can be done more or less like this:
1 - Commit and push everything so that your files are updated on the server;
2 - On the server, cd into the directory of your application and run these commands:
$ bundle package
$ bundle install --deployment
$ RAILS_ENV=production rake db:migrate
$ rake assets:precompile
3 - Restart the server and, if necessary, the reverse proxy.
Dig around on the internet to understand each command. They pretty much force your application into production mode, shrink your JavaScript and CSS, migrate your production database, and install the bundled gems. Production RoR is not so different from development RoR; it is just more compact and faster.
I do hope this information is useful.
Good luck!
Update:
I forgot to mention: check the Ruby Toolbox; it has some really useful statistics and information on how often Rails technologies are being updated. They have many categories; this one is on deployment automation, give it a look: https://www.ruby-toolbox.com/categories/deployment_automation.
Cheers!
I have a project written purely in Ruby. Now I want to include a Java archive (JAR) file that has some functionality my users want. Is it good practice to just place the file in one of the directories and bundle it as a gem? Are there any security issues related to this? Any advice would be greatly appreciated.
The answer is that it depends on the use case.
If this is a gem that users will be using purely for their own purposes, and it's not broadcasting over a network, then security issues are fairly minimal - they would relate more to system security.
If part of your program involves binding to a port and accepting TCP/UDP connections, then you've really got to start thinking about network security. Another possible problem is if you're giving file-system access to non-privileged users (e.g. if this is a Rails gem, the JAR provides functionality to manipulate the file system, and for some reason you're passing that on to the site's users - a bit of a contrived example, but I hope you see what I'm getting at).
However, as for running a java JAR file, there's nothing innately insecure about that unless there are known security flaws with that particular JAR.
In the end, it's up to the end-user of the gem. Make it clear what the gem does and they can make the decision about whether they want to use it.
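On the packaging side, shipping the JAR inside the gem is just a matter of listing it in the gemspec; actually loading it then requires JRuby. A rough sketch (all names hypothetical):

# mygem.gemspec -- the JAR is vendored alongside the Ruby code
Gem::Specification.new do |spec|
  spec.name     = 'mygem'
  spec.version  = '0.1.0'
  spec.summary  = 'Ruby wrapper around a vendored JAR'
  spec.authors  = ['you']
  spec.platform = 'java'                        # signals that the gem expects JRuby
  spec.files    = Dir['lib/**/*.rb'] + ['vendor/some-lib.jar']
end

In lib/mygem.rb you would then require 'java' and require_relative '../vendor/some-lib.jar'; JRuby can load JAR files directly with require.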
I'm about to begin a reasonably large pure-Ruby project, coming from a Java and C background and with some experience with Rails.
I'm looking for advice on the best packaging/layout practice for a distributed Ruby application that basically consists of a client app and a server app.
The client only talks to the server to send/receive objects (JSON, among others) and to upload and download files, all over the network. The server deals with storing all the files locally or remotely and keeps some simple information in a database.
I have already read a lot about this, and I know the best practices for a simple gem, like:
- appname/
  - bin/
  - lib/
    - appname.rb
    - appname/
      - (appname::classes)
  - test/
  - readme, etc.
But what about a reasonably big client-server app like this (two apps in the same project)?
Is it best/more common to split that into two gems? Or keep them in the same gem under different modules?
Do you know of any open-source Ruby project/gem with a structure like this (client and server app) whose choices I could go and look at?
Sorry for the question's length; I'm asking so I can define a good structure right now and avoid problems when the code begins to grow.
The best example that comes to my mind at the moment is picky. It's a very well-done project, and it's worth taking a look at for inspiration.
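Beyond that, one arrangement you'll often see (names hypothetical) is to keep the shared pieces in their own gem and have the client and server gems depend on it, all in one repository:

- appname-common/ (shared entities and JSON serialization)
- appname-client/ (depends on appname-common)
- appname-server/ (depends on appname-common; storage and DB code)

That keeps the wire format in one place while still letting you release and version the client and server independently.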