Error loading Golang and Lua plugins in Kong

I'm trying to install a new instance of Kong, but I'm getting the following error trying to start the service:
stat /tmp/go-plugins/lua_plugin.so: no such file or directory
The installation is supposed to include a plugin built using Golang and a plugin that is still written in Lua. If I remove the Lua plugin, the service starts up fine. The part that's confusing me is why Kong is assuming that both plugins are written in Golang. Other installations have worked fine, so it's very confusing why it's happening now.

The issue was that the directory containing the Lua plugins was not being mounted into the container properly. Specifically, the host path in the volume definitions was wrong; fixing that resolved the issue.
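For reference, a minimal sketch of the kind of volume definitions involved, assuming a docker-compose setup (paths and plugin names are illustrative, not the actual config):

services:
  kong:
    image: kong
    environment:
      KONG_PLUGINS: bundled,my-go-plugin,lua_plugin
    volumes:
      # If this host path doesn't exist or is wrong, the mount comes up empty,
      # the Lua plugin can't be found on the Lua path, and Kong ends up trying
      # to resolve it as a Go plugin (hence the lua_plugin.so error).
      - ./plugins/lua/lua_plugin:/usr/local/share/lua/5.1/kong/plugins/lua_plugin
      - ./plugins/go:/tmp/go-plugins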

Related

Elixir Phoenix and Symlinks on Windows SMB Drive

So I have an interesting issue that I just can't figure out: why I'm getting this, and what to do about it.
Basically, I store all my development projects on my Synology NAS for local access between my various devices. There was never a problem with this until I started playing around with Elixir, and more importantly Phoenix. The issue appears when running mix phx.server; I get the following:
[warn] Phoenix is unable to create symlinks. Phoenix' code reloader will run considerably faster if symlinks are allowed. On Windows, the lack of symlinks may even cause empty assets to be served. Luckily, you can address this issue by starting your Windows terminal at least once with "Run as Administrator" and then running your Phoenix application.
[info] Running DiscussWeb.Endpoint with cowboy 2.7.0 at 0.0.0.0:4000 (http)
[error] Could not start node watcher because script "z:/elHP/assets/node_modules/webpack/bin/webpack.js" does not exist. Your Phoenix application is still running, however assets won't be compiled. You may fix this by running "cd assets && npm install".
[info] Access DiscussWeb.Endpoint at http://localhost:4000
So I tried as it stated and ran it in CMD as admin, but to no avail. After some further inspection I tried to create the symlinks manually, but every time I tried I would get an Access is denied. error (yes, this is elevated CMD).
c:\> mklink "z:\elHP\deps\phoenix" "z:\elHP\assets\node_modules\phoenix"
Access is denied.
So I believe it has something to do with the fact that the symlinks are being created on the NAS, because if I move the project and host it locally it works. Now I know what you're thinking: yes, I could just store the projects locally on my PC, but I like to have them available between PCs without having to transfer files or rely on git etc. (i.e. offline access), not to mention that the NAS has a full backup routine.
What I have tried:
Setting guest read write access on the SMB share
Adding to /etc/samba/smb.conf on my Synology NAS:
[global]
unix extensions = no
[share]
follow symlinks = yes
wide links = yes
Extra logging on SMB to see what is happening when I try it (nothing extra logged)
Creating a symbolic link from my MAC (works)
Setting all of fsutil behavior query SymlinkEvaluation to enabled (see the commands below)
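For reference, this is how I checked and enabled the evaluation flags (L2L/L2R/R2L/R2R are the local/remote link evaluation directions):

fsutil behavior query SymlinkEvaluation
fsutil behavior set SymlinkEvaluation L2L:1 L2R:1 R2L:1 R2R:1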
At the moment I am stuck and unsure of what to try next, or even if it is possible. I'm considering just using NFS instead, but will I face the same issues as with SMB?
P.S. I faced a similar issue with Python venvs a while ago, just a straight-up Access is denied. error, and gave up and moved just the venv locally while keeping the bulk of the code on the NAS. (This actually ended up being the best solution for that, because the environments of each device on my network clashed etc.)
Any ideas are greatly appreciated.

"No filesystem found for scheme gs" when running dataflow in google cloud platform

I am running my Google Dataflow job on Google Cloud Platform (GCP).
When I run this job locally it works well, but when running it on GCP, I get this error:
"java.lang.IllegalArgumentException: No filesystem found for scheme gs".
I have access to that Google Cloud URI: I can upload my jar file to it, and I can see some temporary files from my local job.
My job IDs in GCP:
2019-08-08_21_47_27-162804342585245230 (beam version:2.12.0)
2019-08-09_16_41_15-11728697820819900062 (beam version:2.14.0)
I have tried Beam versions 2.12.0 and 2.14.0; both of them give the same error.
java.lang.IllegalArgumentException: No filesystem found for scheme gs
at org.apache.beam.sdk.io.FileSystems.getFileSystemInternal(FileSystems.java:456)
at org.apache.beam.sdk.io.FileSystems.matchNewResource(FileSystems.java:526)
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryHelpers.resolveTempLocation(BigQueryHelpers.java:689)
at org.apache.beam.sdk.io.gcp.bigquery.BigQuerySourceBase.extractFiles(BigQuerySourceBase.java:125)
at org.apache.beam.sdk.io.gcp.bigquery.BigQuerySourceBase.split(BigQuerySourceBase.java:148)
at org.apache.beam.runners.dataflow.worker.WorkerCustomSources.splitAndValidate(WorkerCustomSources.java:284)
at org.apache.beam.runners.dataflow.worker.WorkerCustomSources.performSplitTyped(WorkerCustomSources.java:206)
at org.apache.beam.runners.dataflow.worker.WorkerCustomSources.performSplitWithApiLimit(WorkerCustomSources.java:190)
at org.apache.beam.runners.dataflow.worker.WorkerCustomSources.performSplit(WorkerCustomSources.java:169)
at org.apache.beam.runners.dataflow.worker.WorkerCustomSourceOperationExecutor.execute(WorkerCustomSourceOperationExecutor.java:78)
at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:412)
at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:381)
at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:306)
at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:135)
at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:115)
at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:102)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
This may be caused by a couple of issues if you build a "fat jar" that bundles all of your dependencies.
You must include the dependency org.apache.beam:google-cloud-platform-core to have the Beam GCS filesystem.
Inside your fat jar, you must preserve the META-INF/services/org.apache.beam.sdk.io.FileSystemRegistrar file with a line org.apache.beam.sdk.extensions.gcp.storage.GcsFileSystemRegistrar. You can find this file in the jar from the dependency above. Your dependencies will probably contain many files with the same name, registering different Beam filesystems; you need to configure Maven or Gradle to concatenate these as part of your build, or they will overwrite each other and not work properly.
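For Maven, a sketch of the shade-plugin configuration that does this merging (the ServicesResourceTransformer concatenates META-INF/services entries instead of letting them overwrite each other):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <!-- merges all META-INF/services files from the dependencies -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>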
There is also one more reason for this exception: make sure you create the pipeline (e.g. Pipeline.create(options)) before you try to access files.
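A minimal Java sketch of the required ordering (the bucket name is illustrative):

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.FileSystems;
import org.apache.beam.sdk.io.fs.ResourceId;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class Main {
  public static void main(String[] args) {
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
    // Creating the pipeline registers the filesystems found on the classpath
    // for these options, including gs:// if the GCP extension is present.
    Pipeline p = Pipeline.create(options);

    // Only now can a gs:// path be resolved:
    ResourceId tmp = FileSystems.matchNewResource("gs://my-bucket/temp", true /* isDirectory */);
  }
}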
[GOLANG] In my case it was solved by adding the imports below for their side effects (each blank import registers its filesystem scheme):
import (
    _ "github.com/apache/beam/sdks/go/pkg/beam/io/filesystem/gcs"
    _ "github.com/apache/beam/sdks/go/pkg/beam/io/filesystem/local"
    _ "github.com/apache/beam/sdks/go/pkg/beam/io/filesystem/memfs"
)
It's normal. On your computer, your tests use local files (/... on Linux, C:\... on Windows). However, Google Cloud Storage isn't a local file system (strictly speaking, it's not a file system at all), and thus the "gs://" scheme can't be interpreted.
Try TextIO.read().from(...).
You can use it for both local files and external ones like GCS.
However, I experienced an issue months ago when I was developing on Windows: C: wasn't a known scheme (the same error as yours).
It's possible that this works now (I'm no longer on Windows, so I can't test). Otherwise, you have this workaround pattern: set a variable in your config object and branch on it, like:
if (/* environment config variable says local */) {
    p.apply(TextIO.read().from("/local/path/to/input"));
} else {
    p.apply(TextIO.read().from("gs://bucket/path/to/input"));
}

Google Cloud Functions and shared libraries

I'm trying to use wkhtmltopdf on GCF for PDF generation.
When my function tries to spawn the child process I get the following error:
Error: ./services/wkhtmltopdf: error while loading shared libraries: libXrender.so.1: cannot open shared object file: No such file or directory
The problem is clearly due to the fact that wkhtmltopdf binary depends on external shared libraries which are not installed in GCF environment.
Is there a way to solve this issue, or should I give up and use other solutions (AWS Lambda or GAE)?
Thank you in advance
Indeed, I've found a way to solve this issue by copying all required libraries into the same folder (bin for me) containing the wkhtmltopdf binary. To let the binary use the uploaded libraries, I added the following lines to wkhtmltopdf.js:
wkhtmltopdf.command = 'LD_LIBRARY_PATH='+path.resolve(__dirname, 'bin')+' ./bin/wkhtmltopdf';
wkhtmltopdf.shell = '/bin/bash';
module.exports = wkhtmltopdf;
Everything worked fine for a while. Then, all of a sudden, I started receiving many connection errors and timeouts from GCF, but I think that's not related to my implementation but rather to Google.
I've ended up setting up a dedicated server.
I have managed to get it working. There are two things that need to be done, as wkhtmltopdf won't work if:
libXrender.so.1 can't be loaded
you are using stdout to collect the resulting PDF; wkhtmltopdf has to write the result into a file
First, you need to obtain the correct version of libXrender.
I found out which Docker image Cloud Functions uses as the base for Node.js functions, ran it locally, installed libxrender, and copied the library into my function's directory:
docker run -it --rm=true -v /tmp/d:/tmp/d gcr.io/google-appengine/nodejs bash
Then, inside the running container:
apt update
apt install libxrender1
cp /usr/lib/x86_64-linux-gnu/libXrender.so.1 /tmp/d
I put this into my function's project directory, under a lib subdirectory. In my function's source file, I then set up LD_LIBRARY_PATH to include the /user_code/lib directory (/user_code is the directory where your function will ultimately be placed by Google):
process.env['LD_LIBRARY_PATH'] = '/user_code/lib'
This is enough for wkhtmltopdf to be able to execute. It will still fail, though, as it won't be able to write to stdout, and the function will eventually time out and be killed (as Matteo experienced). I think this is because Google runs the containers without a tty (just speculation); I can run my code in their container if I run it with the docker run -it flags. To solve this, I invoke wkhtmltopdf so that it writes the output into a file under /tmp (this is an in-memory tmpfs). I then read the file back and send it as my response body. Note that the tmpfs might be reused between function calls, so you need to use a unique file every time.
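A minimal sketch of that flow (assuming the binary is bundled under bin/ as described above; the export name and the page URL are made up):

const { execFile } = require('child_process');
const crypto = require('crypto');
const fs = require('fs');
const path = require('path');

process.env.LD_LIBRARY_PATH = '/user_code/lib';

exports.makePdf = (req, res) => {
  // unique file name, since /tmp (tmpfs) may survive between invocations
  const out = path.join('/tmp', crypto.randomBytes(8).toString('hex') + '.pdf');
  execFile('./bin/wkhtmltopdf', ['http://example.com', out], (err) => {
    if (err) {
      res.status(500).send(err.message);
      return;
    }
    const pdf = fs.readFileSync(out);
    fs.unlinkSync(out); // free the in-memory tmpfs space
    res.set('Content-Type', 'application/pdf').send(pdf);
  });
};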
This seems to do the trick and I am able to run wkhtmltopdf as Google CloudFunction.

ELF Header Error and Building Modules In Apache for Jelastic

So I'm building a web app and I decided to move it from my localhost to Jelastic. The app requires one custom module: mod_auth_cas. I followed the instructions on the Jelastic website for adding a module.
The only step I didn't follow was compiling the module against 2.2.15. I tried configuring that version, but I couldn't figure out how to run it concurrently with the 2.2.24 version my Mac runs natively. I figured that a module that worked with 2.2.24 should work with 2.2.15.
I uploaded the .so file to the Jelastic server and added the following LoadModule command to the httpd.conf file:
LoadModule auth_cas_module /usr/lib64/php/modules/mod_auth_cas.so
and restarted Apache. I got the following error:
Failed to start
Stopping httpd [ OK ] Starting httpd
Jelastic autoconfiguration mark httpd
Syntax error on line 161 of /etc/httpd/conf/httpd.conf
Cannot load /usr/lib64/php/modules/mod_auth_cas.so into server
/usr/lib64/php/modules/mod_auth_cas.so invalid ELF header [FAILED]
From the research I did, it seemed as though this error comes when "the installation is 'corrupted' or someone installed something for the wrong processor/binary type."
So I'm trying to figure out what to do. I either need to figure out how to install Apache 2.2.15 and compile a module against that, or I need to figure out what I'm doing wrong on the Jelastic side, or I need to figure out why the .so file is getting corrupted. Which one is it, and how do I do it?
Indeed the problem is the different platforms.
The module was compiled for the correct platform and installed for you.
FYI: to use this module, we created a cas.conf file in conf.d; please open this file and modify it accordingly.
I recommend that you contact your hosting provider and ask them to compile that module for you. The problem is most likely caused by exactly that (compilation on a different system, or a system that is too dissimilar); at the very least it's the first thing to rule out.
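To rule it out yourself, you can inspect what the uploaded .so was actually built for (a quick check; the path is the one from the error message):

file /usr/lib64/php/modules/mod_auth_cas.so
readelf -h /usr/lib64/php/modules/mod_auth_cas.so   # Class and Machine (e.g. ELF64, x86-64) must match the server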

Module error when using an NGINX server to deploy a Meteor app from an Amazon Linux AMI 2013.09.2 instance

I am attempting to deploy my first web application (a version of Telescope from the MeteorJS framework) via Heroku to a custom subdomain from an Amazon Linux AMI 2013.09.2 instance. I am following along with this tutorial: http://satishgandham.com/2013/12/a-complete-guide-to-install-production-ready-telescope-on-your-own-server/. But once I attempt to run Telescope using
PORT=3000 MONGO_URL=mongodb://localhost:3000/Telescope ROOT_URL=http://ec2-54-193-42-229.us-west-1.compute.amazonaws.com node client/main.js
I receive this error message:
Error: Cannot find module '/home/ec2-user/bundle/programs/server/node_modules/fibers/client/main.js'
To solve this problem, I have tried cp and mv to move the file main.js, which is originally located in the ~/Telescope/client directory, over to the /home/ec2-user/bundle/programs/server directory and even /home/ec2-user/bundle/programs/server/node_modules/fibers, but I cannot seem to separate main.js from the client directory. I am not sure if that is the issue or if there is some other underlying problem, but at this point I want to find a workaround to using a proxy server. I thought that moving the main.js file out of the client directory was sufficient, but apparently not. I am not sure it is imperative for my purposes to keep trying to use a proxy, but if there is a fix, I would not mind learning about it.
Or, if anyone could direct me on how this (https://github.com/aldeed/deploymeteor/) could be a potential workaround to using an NGINX server proxy, your help would be much appreciated.
You are getting the error because you are not running the command from your home folder.
You were in bundle/programs/server/node_modules/fibers.
Either use an absolute path for client/main.js, or cd to ~ and then run:
MONGO_URL=mongodb://localhost:3000/Telescope ROOT_URL=http://ec2-54-193-42-229.us-west-1.compute.amazonaws.com node client/main.js
PS: It would be helpful for others if you asked the question on the post itself, instead of here.
