Add CSS minifier to Sprockets - ruby

I have a web application which uses Rack.
The code:
set :assets, (Sprockets::Environment.new { |env|
  env.js_compressor = Uglifier.new({
    :output => {
      :preserve_line => true,
      :bracketize    => true,
      :beautify      => true,
      :indent_level  => 4,
      :semicolons    => true,
    },
    :mangle => false
  })
  env.append_path(APP_ROOT + "/app/assets/images")
  env.append_path(APP_ROOT + "/app/assets/javascripts")
  env.append_path(APP_ROOT + "/app/assets/stylesheets")
})
I now want to add a CSS minifier to it.
Can someone explain why only JavaScript files are run through the JS compressor above?
Can I add something like env.css_compressor = YUI::CssCompressor.new() after the js_compressor assignment to get what I need?
UPDATE: The second suggestion actually worked, but I have no clue how it worked :)

You hadn't set the Sprockets::Environment#css_compressor attribute, so there was no compressor available to run on text/css assets.
puts Sprockets::Environment.instance_methods.inspect
#=> [... :css_compressor, :css_compressor=, :js_compressor, :js_compressor=, ...]
To answer your question about how assets are loaded: yes, by default Sprockets searches a single load path, and you can manipulate it to include other directories.
From https://github.com/sstephenson/sprockets:
The load path is an ordered list of directories that Sprockets uses to search for assets. To add a directory to your environment's load path, use the append_path and prepend_path methods.
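Putting it together, a minimal sketch of an environment with both compressors wired up (assuming the yui-compressor gem is installed; the css_compressor line is exactly the one from the question):

require 'sprockets'
require 'uglifier'
require 'yui/compressor' # provides YUI::CssCompressor

set :assets, (Sprockets::Environment.new { |env|
  env.js_compressor  = Uglifier.new(:mangle => false) # runs on application/javascript assets
  env.css_compressor = YUI::CssCompressor.new         # runs on text/css assets
  env.append_path(APP_ROOT + "/app/assets/javascripts")
  env.append_path(APP_ROOT + "/app/assets/stylesheets")
})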

Related

Puppet file resource require archive resource

I am using Puppet for my Vagrant provisioning. I used the archive module at https://forge.puppet.com/puppet/archive/types to download and extract GlassFish like this:
archive { '/tmp/glassfish-4.1.1.zip':
  ensure       => present,
  extract      => true,
  extract_path => '/opt/',
  source       => 'http://download.java.net/glassfish/4.1.1/release/glassfish-4.1.1.zip',
  cleanup      => true,
  creates      => '/opt/glassfish4',
}
After that resource is applied, I want to move a file into the newly created GlassFish directory, like this:
file { 'domain.xml':
  ensure => file,
  path   => '/opt/glassfish4/glassfish/domains/domain1/config/domain.xml',
  source => 'puppet:///modules/glassfish/domain.xml',
}
In the file resource I want to require that the extraction has already happened. Since the extraction creates a directory rather than a file, something like
require => File['..']
is not working.
You should add a require on the archive resource, so your file resource becomes:
file { 'domain.xml':
  ensure  => file,
  path    => '/opt/glassfish4/glassfish/domains/domain1/config/domain.xml',
  source  => 'puppet:///modules/glassfish/domain.xml',
  require => Archive['/tmp/glassfish-4.1.1.zip'],
}
so that domain.xml is copied only after the archive resource has been applied.
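Equivalently, Puppet's resource chaining arrows express the same ordering outside the resource body; this is a stylistic alternative, not something the original answer used:

Archive['/tmp/glassfish-4.1.1.zip'] -> File['domain.xml']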

nconf.js-like gem for Ruby configuration

I'm building a CLI tool in Ruby, and I need to take configuration from different sources: environment variables, a dotfile, arguments, or hardcoded values (with a precedence system).
In Node.js I would have used nconf.js to do this.
Is there a configuration gem in Ruby that enables such a thing?
The actual answer (updated 2020-02-26) is this:
https://github.com/infochimps-labs/configliere
To quote the author:
Be willing to sit down with the Five Families. Takes settings from (at your option):
Pre-defined defaults from constants
Simple config files
Environment variables
Commandline options and git-style command runners
Ruby block (called when all other options are in place)
Put simply: just like nconf.
require 'configliere'

Settings.use :commandline

Settings({
  :dest_time     => '11-05-1955',
  :fluxcapacitor => {
    :speed => 88,
  },
  :delorean => {
    :power_source => 'plutonium',
    :roads_needed => true,
  },
  :username => 'marty',
  :password => '',
})

# let a value possibly also come from the environment
Settings.define :dest_time, :env_var => 'DEST_TIME'

Settings.read "#{__dir__}/config.yml"
Settings.read "#{Dir.pwd}/config.yml"
Settings.resolve!
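After resolve!, every source has been merged into Settings and later sources win. A hypothetical invocation, assuming configliere's usual precedence of command line over environment over config files over defaults:

DEST_TIME=11-05-2015 ruby tool.rb --username=doc
# then, inside the tool:
Settings[:username]  #=> "doc"         (from the command line)
Settings[:dest_time] #=> "11-05-2015"  (from the environment variable)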
Old answer:
https://github.com/rubyconfig/config#working-with-environment-variables
It doesn't do argv, but it lets you layer various YAML files and then override with ENV, just like nconf lets you.

Add image from URL to Excel with Axlsx

I'm using the Axlsx gem and it's working great, but I need to add an image to a cell.
I know it can be done with an image file (see "Adding image to Excel file generated by Axlsx?"), but I'm having a lot of trouble using our images stored on S3 (through CarrierWave).
Things I've tried:
# image.url = 'http://.../test.jpg'
ws.add_image(:image_src => image.url, :noSelect => true, :noMove => true) do |image|
# ArgumentError: File does not exist
or
ws.add_image(:image_src => image, :noSelect => true, :noMove => true) do |image|
# Invalid Data #<Object ...>
Not sure how to proceed.
Try using read to pull the contents into a tempfile and use that location:
t = Tempfile.new('my_image')
t.binmode
t.write image.read
t.close
ws.add_image(:image_src => t.path, ...
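A fuller, self-contained sketch of the same idea; the URL, dimensions, and placement are made up for illustration, and open-uri is assumed to be available:

require 'axlsx'
require 'open-uri'
require 'tempfile'

package = Axlsx::Package.new
package.workbook.add_worksheet(:name => 'Images') do |ws|
  t = Tempfile.new(['my_image', '.jpg']) # the extension helps Axlsx detect the content type
  t.binmode
  t.write open('http://example.com/test.jpg').read # hypothetical URL, fetched via open-uri
  t.close
  ws.add_image(:image_src => t.path, :noSelect => true, :noMove => true) do |image|
    image.width  = 120
    image.height = 80
    image.start_at 0, 0 # anchor the image at cell A1
  end
end
package.serialize('images.xlsx')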
To add an alternative answer for Paperclip & S3, since I couldn't find a reference for that besides this answer:
I'm using Rails 5.0.2 and Paperclip 4.3.1.
With image URLs like: http://s3.amazonaws.com/prod/accounts/logos/000/000/001/original/logo.jpg?87879987987987
@logo = @account.company_logo
if @logo.present?
  @logo_image = Tempfile.new(['', ".#{@logo.url.split('.').last.split('?').first}"])
  @logo_image.binmode # note that our tempfile must be in binary mode
  @logo_image.write open(@logo.url).read
  @logo_image.rewind
end
In the .xlsx file:
sheet.add_image(image_src: @logo_image.path, noSelect: true, noMove: true, hyperlink: "#") do |image|...
Reference link for more reading: http://mensfeld.pl/tag/tempfile/
The .split('.').last.split('?').first is there to extract jpg from logo.jpg?87879987987987, so the Tempfile gets a proper .jpg extension.

Nanoc + Bower = Error - Found 2 content files for

I'm using nanoc to generate a static site.
Recently I added Bower to manage front-end dependencies.
When I add Bootstrap via Bower, I place the package in /assets/bower/.
The Bootstrap package contains multiple files, including:
bootstrap/js/tests/vendor/qunit.css
bootstrap/js/tests/vendor/qunit.js
My Rules file has these rules:
route '/assets/*' do
  extension = item[:extension]
  if extension == 'coffee'
    extension = 'js'
  end
  item.identifier.chop + '.' + extension
end

compile '*', :rep => :spec do
  if !item[:spec_files].nil? && !item.binary?
    filter :erb
    layout 'spec'
  end
end

route '*', :rep => :spec do
  if !item[:spec_files].nil? && !item.binary?
    '/specs' + item.identifier[0..-2] + '.html'
  end
end

compile '*' do
  if !item.binary?
    filter :erb
    layout_name = item[:layout] || 'default'
    layout layout_name
  end
end

route '*' do
  if item.binary?
    item.identifier.chop + '.' + item[:extension]
  else
    item.identifier[0..-2] + '.html'
  end
end
When running nanoc I get the following error:
RuntimeError: Found 2 content files for
content/assets/bower/bootstrap/js/tests/vendor/qunit; expected 0 or 1
I tried adding two new 'empty' rules for the /assets/bower/ folder, but I am still getting the error.
route '/assets/bower/*' do
end
compile '/assets/bower/*' do
end
Any suggestions?
Later edit:
It looks like nanoc supports a static data source that also takes the file extension into consideration:
https://github.com/nanoc/nanoc-site/blob/master/content/docs/troubleshooting.md
I am still not sure whether I can use both data sources in parallel.
Unfortunately, you can't have two files in the same directory whose names differ only in the part after the last extension. nanoc 4.0 will be rewritten to change that.
You can definitely have multiple data sources in use at once, but that means you can't apply filters to the qunit files, only route the output.
Do you explicitly have to organise files the same way Bower installs them? If you can, it might be a better idea to split them up into scripts and styles anyway; you'll almost certainly be filtering based on file type, and that means in Rules you can just write
compile '/whatever-path/scripts/' do
  filter :concatenate
  filter :uglify_js
end
rather than
compile '/whatever-path/' do
  case item[:extension]
  when 'js'
    filter :uglify_js
  when 'scss'
    filter :sass
  end
end
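As for running both data sources in parallel, the wiring would live in nanoc's config.yaml. A hypothetical sketch only; the static type and the exact option names are assumptions based on the linked troubleshooting page, not verified configuration:

data_sources:
  - type: filesystem_unified   # the regular content/ tree
    items_root: /
  - type: static               # serves files verbatim, keyed by full filename
    items_root: /assets/bower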

InstantCommons not working in MediaWiki 1.19 and SELinux

I am setting up my own MediaWiki website locally, and am not able to get the InstantCommons feature to work (used to directly embed files from commons.wikimedia.org).
I get no error message; the files I try to load from Commons using the following syntax:
[[File:Cervus elaphus Luc Viatour 1.jpg|Cervus elaphus Luc Viatour 1]]
are simply not loaded, and I end up with a red link on my page referring to a non-existing file. I have been looking for a solution for two days now, so far without any success.
I am running:
MediaWiki v.1.19.1
Fedora 16 (with SElinux)
PHP 5.3.15
MySQL Ver 14.14 Distrib 5.5.25a, for Linux (x86_64)
I have tried the following two configurations in my LocalSettings.php, without success:
$wgUseInstantCommons = true;
AND
$wgForeignFileRepos[] = array(
  'class'                  => 'ForeignAPIRepo',
  'name'                   => 'shared',
  'apibase'                => 'http://commons.wikimedia.org/w/api.php',
  'fetchDescription'       => true,  // Optional
  'descriptionCacheExpiry' => 43200, // 12 hours, optional (values are seconds)
  'apiThumbCacheExpiry'    => 43200, // 12 hours, optional, but required for local thumb caching
);
Any suggestion is most welcome.
OK, this is not (yet) an answer, but a debugging suggestion. It looks to me like the HTTP request from your server to Commons is failing for some reason, but unfortunately ForeignAPIRepo doesn't indicate the cause of the error in any way.
This is really a bug in MediaWiki and should be fixed, but in the meantime, could you please try applying the following diff (or just manually adding the line marked with the + sign) to your includes/filerepo/ForeignAPIRepo.php file:
Index: includes/filerepo/ForeignAPIRepo.php
===================================================================
--- includes/filerepo/ForeignAPIRepo.php	(revision 97048)
+++ includes/filerepo/ForeignAPIRepo.php	(working copy)
@@ -385,6 +385,7 @@
 		if ( $status->isOK() ) {
 			return $req->getContent();
 		} else {
+			wfDebug( "ForeignAPIRepo: HTTP GET failed: " . $status->getXML() );
 			return false;
 		}
 	}
After applying it, try loading the file description page for a Commons image and look at the MediaWiki debug log. There should now be a line starting with ForeignAPIRepo: HTTP GET failed: followed by a few lines of XML error dump. That error data should hopefully indicate what's going wrong; please copy and paste it here.
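For that wfDebug output to end up anywhere, the debug log must be enabled, and since the question title mentions SELinux it is also worth ruling out Fedora's policy blocking outbound HTTP from Apache. Both settings below are standard knobs; whether SELinux is actually the culprit here is only a guess:

// LocalSettings.php: write debug output to a file the web server can create
$wgDebugLogFile = '/tmp/mediawiki-debug.log';

# shell, as root: allow httpd processes to open outbound network connections
setsebool -P httpd_can_network_connect 1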
Mine is not a definitive answer either. Referring to Ilmari Karonen's post, I was unable to get the getXML() method to execute on my version of MediaWiki, v1.23.0. I went looking through the Status class reference documentation for other method calls that would give me good troubleshooting info. I ended up with the following, editing the same includes/filerepo/ForeignAPIRepo.php file mentioned in Ilmari Karonen's post, beginning at line #521:
if ( $status->isOK() ) {
	return $req->getContent();
} else {
	$error = $status->getErrorsArray();
	$dump = print_r( $error, true );
	wfDebug( "ForeignAPIRepo: HTTP GET failed: $dump\n" );
	return false;
}
The default InstantCommons configuration of older MediaWikis is a bit silly. Due to T114098 I recommend one of the following, which will hopefully fix your problems:
upgrade to MediaWiki 1.27 (when it's released), or
set your LocalSettings.php to hotlink images, to save on server-side requests and processing:
$wgUseInstantCommons = false;
$wgForeignFileRepos[] = array(
  'class'                  => 'ForeignAPIRepo',
  'name'                   => 'commonshotlink',
  'apibase'                => 'https://commons.wikimedia.org/w/api.php',
  'hashLevels'             => 2,
  'url'                    => 'https://upload.wikimedia.org/wikipedia/commons',
  'thumbUrl'               => 'https://upload.wikimedia.org/wikipedia/commons/thumb',
  'transformVia404'        => true,
  'fetchDescription'       => true,
  'descriptionCacheExpiry' => 43200,
  'apiThumbCacheExpiry'    => 24 * 3600,
);
