Change public folder for Ruby on Rails 5 Docker application - ruby

We are running a Ruby on Rails 5 application in a Docker Container.
The container is hosted on Azure Container Apps.
We need to persist the public folder and would like to use an Azure File Share for this. The share is working and accessible (tested via a script).
The problem is that with Azure Container Apps we can't define a volume, as we would in docker-compose, to map the public folder from inside the container to the outside:
volumes:
  - "./path/to/public/folder:/app/public"
Usually I would pass the volume to docker run, but this is not possible with Azure Container Apps.
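For reference, the equivalent bind mount passed to docker run would look like this (the image name and host path are placeholders):
docker run -v /path/to/public/folder:/app/public my-rails-image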
I am not a Ruby developer.
I tried to change the public folder for assets as described in
How to override public folder path in Rails 4?
in config/application.rb:
config.assets.paths['public'] = File.join('/cms-share', 'public')
Sadly, this leads to an exception on startup:
! Unable to load application: TypeError: no implicit conversion of String into Integer
/app/config/application.rb:32:in `[]=': no implicit conversion of String into Integer (TypeError)
Thanks in advance

You have picked the wrong object to modify.
config.assets.paths
#=> ['app/assets/images', ..]
Those are asset paths, and the returned object is an array. What you need to modify is the Rails paths object, which returns a different kind of object that can be changed:
config.paths['public']
=> #<Rails::Paths::Path:0x0000000113acbca0
 @autoload=false,
 @autoload_once=false,
 @current="public",
 @eager_load=false,
 @exclude=nil,
 @glob=nil,
 @load_path=false,
 ...
Note that this second one has nothing to do with assets; it is just config.paths:
config.paths['public'] = File.join('/cms-share', 'public')
This line changes the default public path and solves your problem.
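Putting it together, a minimal sketch of config/application.rb (the module name YourApp is a placeholder, and /cms-share is assumed to be the mount point of the Azure File Share from the question):
require_relative 'boot'
require 'rails/all'

module YourApp
  class Application < Rails::Application
    # Serve the public directory from the mounted Azure File Share
    # instead of the bundled /app/public.
    config.paths['public'] = File.join('/cms-share', 'public')
  end
end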

Related

Calls From External Web Components in PWAs [duplicate]

We are running 2 servers. Server 1 hosts a React application. Server 2 hosts a webcomponent exposed as a single JavaScript bundle, along with some assets such as images. We are dynamically loading the webcomponent JavaScript hosted on Server 2 in our React app hosted on Server 1. The fact that it is a webcomponent might or might not affect the issue.
What's happening is that the webcomponent makes use of assets such as images that are located on Server 2. But when the React app loads the webcomponent, the images are not found, because the browser looks for them locally on Server 1.
We can fix this in many ways. I am looking for the simplest fix. Since the Server 1 and Server 2 apps are being developed by different teams, both should be able to develop in the most natural way possible, without making allowances for their app potentially being loaded by other apps.
The fixes that I could think of were:
Making use of absolute URLs to load assets - need to know the deployed location in advance.
Adding a reverse proxy to Server 1 that redirects to Server 2 whenever a particular path is hit - need to configure this; the React app could load hundreds of webcomponents, i.e. we would need to add a lot of reverse proxies.
Embedding all assets into the single JavaScript bundle on Server 2, e.g. inlining SVGs into the JavaScript - too limiting; huge SVGs would make the bundle size much bigger.
I was hoping to implement something like this: since the React app knows where to hit Server 2, can't we write some clever JavaScript that makes the browser go to Server 2 whenever assets are requested by a script loaded from Server 2?
If you load your Web Component via a classic script (<script> with the default type="text/javascript"), you can retrieve the URL of the loaded file with document.currentScript.src.
If you load the file as a module script (<script> with type="module"), you can retrieve the URL with import.meta.url.
Then parse that property to extract the base path to the Web Component.
Example of a Web Component JavaScript file:
// Derive the component's base URL from the URL this script was loaded from,
// so assets resolve against Server 2 rather than the embedding page.
// Note: import.meta is only valid syntax inside module scripts; a file meant
// to run as a classic script should use document.currentScript.src alone.
(function (path) {
  var base = path.slice(0, path.lastIndexOf('/'))
  customElements.define('my-comp', class extends HTMLElement {
    constructor() {
      super()
      this.attachShadow({ mode: 'open' })
          .innerHTML = `<img src="${base}/image.png">`
    }
  })
})(document.currentScript ? document.currentScript.src : import.meta.url)
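For completeness, the host page on Server 1 would then just load the bundle from Server 2 and use the element; the URL below is a placeholder:
<!-- Host page on Server 1; the Server 2 URL is hypothetical -->
<script src="https://server2.example.com/components/my-comp.js"></script>
<my-comp></my-comp>
With that, base resolves to https://server2.example.com/components, so image.png is fetched from Server 2 no matter which page embeds the component.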
How about uploading all required assets to a third location, such as an AWS S3 bucket, Google Drive, Dropbox, etc.? That way those assets always have a unique, known URL, and both teams can access them independently. As long as the names remain the same, so will the URLs.

Terraform Heroku provider: Does it have a resource for dynos?

Using Terraform, I could create Heroku applications, create and attach add-ons, and put the applications in a pipeline. After the infrastructure is created, everything is good except that the dynos are not started. I used the heroku/nodejs buildpack. Terraform's Heroku provider does not provide any explicit resource type that corresponds to a Heroku dyno. Are we supposed to push the application to Heroku manually once the necessary add-ons and pipeline have been created with Terraform?
I googled a lot but couldn't figure out why the dynos are not getting started after the necessary infrastructure is in place.
Please help.
So today I wanted to test Heroku with Terraform and ran into the same issue.
It looks like you need to push your app to the git_url reference provided by heroku_app.
I made a working example at https://github.com/nroitero/terraform_heroku
I'm doing it as in the example below, and it works.
First, define the Heroku app:
resource "heroku_app" "this" {
name = var.HEROKU_APP_NAME
region = var.HEROKU_REGION
space = var.HEROKU_SPACE
internal_routing = var.HEROKU_INTERNAL_ROUTING
Then, indicate where the Node application is:
resource "heroku_build" "this" {
app = heroku_app.this.name
#buildpacks = [var.BUILDPACK_URL]
source = {
#url = var.SOURCE_URL
#version = var.SOURCE_VERSION
#testing path instead of source
path = var.SOURCE_PATH
}
}
And to define dynos, I'm using:
resource "heroku_formation" "this" {
app = heroku_app.this.name
type = var.HEROKU_FORMATION_TYPE
quantity = var.HEROKU_FORMATION_QTY
size = var.HEROKU_FORMATION_SIZE
depends_on = [heroku_build.this]
}
For the dyno size parameter (var.HEROKU_FORMATION_SIZE), use the official dyno type "name" as listed on https://devcenter.heroku.com/articles/dyno-types.
For private spaces, the names are private-s, private-m, and private-l.
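For example, a single web dyno in a private space could be declared with values like these (illustrative defaults, not taken from the original answer):
variable "HEROKU_FORMATION_TYPE" {
  default = "web"
}

variable "HEROKU_FORMATION_QTY" {
  default = 1
}

variable "HEROKU_FORMATION_SIZE" {
  default = "private-s"
}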

Azure Ruby SDK create public blob

I'm having an issue where I'm creating a container programmatically using the Ruby SDK, and it is being set to Private immediately.
I'm using the Azure Ruby SDK found here: https://github.com/Azure/azure-sdk-for-ruby
def create_container(container_name)
  container = Azure.blobs.create_container(container_name)
end
How do I set the access level to Public Blob? Right now I have to go to the portal and set it manually.
Copying the code from here :)
container = azure_blob_service.create_container("test-container", :public_access_level => "container")
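If the container already exists, you should not need the portal either; here is a minimal sketch, assuming your SDK version also exposes set_container_acl (check the gem's documentation):
# "blob" allows anonymous reads of individual blobs only; "container"
# additionally allows anonymous listing of the container's contents.
azure_blob_service.set_container_acl("test-container", "blob")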

Empty result from Virtual Member Manager on IBM WebSphere WAS 8.5

I am using IBM WebSphere Application Server and trying to access the default file repository from my application using the Virtual Member Manager. Below is the code I use; it runs without errors, but I get an empty result. I checked the fileRegistry.xml file and it contains users. Can anyone tell me where my problem is?
DataObject root = SDOHelper.createRootDataObject();
DataObject searchCtrl = SDOHelper.createControlDataObject(root, null, SchemaConstants.DO_SEARCH_CONTROL);
searchCtrl.getList(SchemaConstants.PROP_PROPERTIES).add("uid");
searchCtrl.getList(SchemaConstants.PROP_SEARCH_BASES).add("o=defaultWIMFileBasedRealm");
searchCtrl.setString(SchemaConstants.PROP_SEARCH_EXPRESSION, "@xsi:type='PersonAccount' and uid='*'");
root = getVMMService().search(root);
System.out.println("Output data graph" + printDO(root));
Does the getVMMService() method return an instance of LocalServiceProvider? I ran the example in a standalone app. In the example at http://www.ibm.com/developerworks/websphere/zones/portal/proddoc/dw-w-userrepository/ it is called via the CORBA protocol; when I try it with servlets for testing, I comment out those lines.

Handling dynamically created files in Play 2

I wrote a small app that creates downloadable PDF files with Play 2.0, and I want to serve them to the public. In my development environment I created a folder inside the /assets/ folder and everything worked nicely.
Now, when switching to production, I discovered that Play always packages those files at build time, behind my back.
Do I really have to write my own controller to serve those files, or what is the right way to do this?
I've also had problems trying to serve dynamically created files with the assets controller. I don't know if it is due to some kind of caching, but I ended up writing my own controller, and now it works perfectly.
That is, I use the Assets controller for regular public files, and for my dynamically generated files I use this one:
import java.io.File;
import play.mvc.Controller;
import play.mvc.Result;

public class FileService extends Controller {
    static String path = "/public/dynamicfiles/";

    public static Result getFile(String file) {
        // Resolve the requested file relative to the app's working directory
        File myfile = new File(System.getenv("PWD") + path + file);
        return ok(myfile);
    }
}
And the routes would be like this:
GET /files/:file controllers.FileService.getFile(file: String)
GET /assets/*file controllers.Assets.at(path="/public", file)
It works great for me.
