Runtime issues with Ballerina Integrator - FTP

I am trying to run the File Integration with FTP sample provided by Ballerina Integrator.
While running the service I face the same issue every single time.
I have installed Ballerina Integrator only. I have also uninstalled and reinstalled it from scratch, and the issue is still the same.
Please help me.

I could successfully run the sample with the following configuration (sample data are given). Here I have used a secured FTP (SFTP) server for the configuration.
listener ftp:Listener dataFileListener = new({
    protocol: ftp:SFTP,
    host: "18.156.78.137",
    port: 22,
    secureSocket: {
        basicAuth: {
            username: "cloudloc",
            password: "fsf#$#213"
        }
    },
    path: "/clouddir/"
});

ftp:ClientEndpointConfig ftpConfig = {
    protocol: ftp:SFTP,
    host: "18.156.78.137",
    port: 22,
    secureSocket: {
        basicAuth: {
            username: "cloudloc",
            password: "fsf#$#213"
        }
    }
};
Make sure you set the path parameter correctly in the dataFileListener; without this parameter I could reproduce your attached error.
Once this is configured correctly, you will see log entries like the following.
2020-01-24 15:13:23,758 INFO [wso2/ftp] - Listening to remote server at 18.156.78.137...
2020-01-24 15:13:24,333 INFO [wso2/file_integration_using_ftp] - Added file path: /clouddir/a1.txt
2020-01-24 15:13:24,415 INFO [wso2/file_integration_using_ftp] - Added file: /clouddir/a1.txt - 12
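For reference, those "Added file" lines are printed by the service attached to the listener. Below is a minimal sketch of such a service, modeled on the sample; the resource shape and WatchEvent field names follow the wso2/ftp module as I recall it, so treat the exact signatures as assumptions.

import ballerina/log;
import wso2/ftp;

service dataFileService on dataFileListener {
    resource function processDataFile(ftp:WatchEvent fileEvent) {
        // Log each file newly added under the watched path.
        foreach ftp:FileInfo file in fileEvent.addedFiles {
            log:printInfo("Added file path: " + file.path);
        }
        // Log each file removed from the watched path.
        foreach string file in fileEvent.deletedFiles {
            log:printInfo("Deleted file path: " + file);
        }
    }
}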

Just install Ballerina Integrator alone; it is packaged with Ballerina 1.0.2, so there is no need to install Ballerina again separately. As for why no output comes from VS Code: the extensions on the VS Code marketplace are all upgraded to the latest version, so my locally installed "BI with Ballerina" was an older version while the one in VS Code was the latest. This version mismatch was the main problem I faced.

Related

Spring Boot app in Docker container not starting in Cloud Run after building successfully - cannot access jarfile

I've set up continuous deployment to Cloud Run from GitHub for my Spring Boot project, and while it's successfully building in Cloud Build, when I go over to Cloud Run, I get the following error under Creating Revision:
The user-provided container failed to start and listen on the port defined by the PORT=8080 environment variable.
When I go over to the Logs, I see the following errors:
2022-09-23 09:42:47.881 BST
Error: Unable to access jarfile /app/target/educity-manager-0.0.1-SNAPSHOT.jar
{
  insertId: "632d7187000d739d29eb84ad"
  labels: {5}
  logName: "projects/educity-manager/logs/run.googleapis.com%2Fstderr"
  receiveTimestamp: "2022-09-23T08:42:47.883252595Z"
  resource: {2}
  textPayload: "Error: Unable to access jarfile /app/target/educity-manager-0.0.1-SNAPSHOT.jar"
  timestamp: "2022-09-23T08:42:47.881565Z"
}

2022-09-23 09:43:48.800 BST
run.googleapis.com …ager/revisions/educity-manager-00011-fod
Ready condition status changed to False for Revision educity-manager-00011-fod with message: Deploying Revision.
{
  insertId: "w6ptr6d20ve"
  logName: "projects/educity-manager/logs/cloudaudit.googleapis.com%2Fsystem_event"
  protoPayload: {
    #type: "type.googleapis.com/google.cloud.audit.AuditLog"
    resourceName: "namespaces/educity-manager/revisions/educity-manager-00011-fod"
    response: {6}
    serviceName: "run.googleapis.com"
    status: {2}
  }
  receiveTimestamp: "2022-09-23T08:43:49.631015104Z"
  resource: {2}
  severity: "ERROR"
  timestamp: "2022-09-23T08:43:48.800371Z"
}
Dockerfile is as follows (and looking at the build log all of the commands in it completed successfully):
FROM openjdk:17-jdk-alpine
RUN addgroup -S spring && adduser -S spring -G spring
USER spring:spring
COPY . /app
ENTRYPOINT [ "java","-jar","/app/target/educity-manager-0.0.1-SNAPSHOT.jar" ]
I've read that Cloud Run defaults to exposing Port 8080, but just to be on the safe side I've put server.port=${PORT:8080} in my application.properties file (but it seems to make no difference one way or the other).
I have run into similar issues in the past. Usually, I am able to resolve this issue by:
specifying the port in the application itself (as you indicated in your post), and
exposing the required port in my Dockerfile, e.g. EXPOSE 8080
Oh my good god I have done it. After two full days of digging, I realised that because I was doing it through github, my .gitignore file was excluding the /target folder containing the jar file, so Cloud Build never got the jar file mentioned in the Dockerfile.
I am going to have a cry and then go to the pub.
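As an aside: one way to avoid this class of problem is a multi-stage build that compiles the jar inside the image, so nothing from the local (and possibly git-ignored) target folder is ever copied. A minimal sketch, assuming a standard Maven layout; the base image tags are illustrative:

# Build stage: compile the jar inside the image so the local target folder is never needed
FROM maven:3.8-openjdk-17 AS build
WORKDIR /app
COPY . .
RUN mvn -q package -DskipTests

# Runtime stage: copy only the built jar from the build stage
FROM openjdk:17-jdk-alpine
RUN addgroup -S spring && adduser -S spring -G spring
USER spring:spring
COPY --from=build /app/target/educity-manager-0.0.1-SNAPSHOT.jar /app/app.jar
EXPOSE 8080
ENTRYPOINT [ "java", "-jar", "/app/app.jar" ]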

Caddy not working in api-platform 2.6.4 distribution - panic: proto: file "pb.proto" is already registered

When I try to use api-platform version 2.6.4, I am not able to run it. When I build and start the containers and check the logs, Caddy is not working and I get an error like this. Any idea? Caddy version is 2.3.0.
caddy_1 | panic: proto: file "pb.proto" is already registered
caddy_1 | See https://developers.google.com/protocol-buffers/docs/reference/go/faq#namespace-conflict
tureality_caddy_1 exited with code 2
Other people have reported this bug, and I had it too.
Fortunately, the bug has just been fixed by Dunglas himself. :)
https://github.com/api-platform/api-platform/issues/1881#issuecomment-822663193
The repair was done at the Mercure level and not in the api-platform source code itself, so you can keep your current version.
You just have to docker-compose up and it will work.
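If your containers were built before the fix landed, you may need to pull the rebuilt images first. A minimal sequence, assuming the stock docker-compose setup:

docker-compose pull
docker-compose up -d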

Can't hit spring-cloud-dataflow HTTP(source) application

I have been following a tutorial to create a stream with spring-cloud-dataflow. It creates the following stream -
http --port=7171 | transform --expression=payload.toUpperCase() | file --directory=c:/dataflow-output
All three applications start up fine. I am using RabbitMQ, and if I log in to the Rabbit UI I can see that two queues get created for the stream. The tutorial said that I should be able to POST a message to http://localhost:7171 using Postman. When I do this, nothing happens: I do not get a response, I do not see anything in the queues, and no file is created. In my dataflow logs I can see this being listed.
local: [{"targets":["skipper-server:20060","skipper-server:20052","skipper-server:7171"],"labels":{"job":"scdf"}}]
The tutorial was using an older version of dataflow that I do not believe made use of Skipper. Since I am using Skipper, does that change the URL? I tried http://skipper-server:7171 and http://localhost:7171, but neither seems to reach the endpoint. I did turn off SSL cert verification in the Postman settings.
Sorry for asking so many dataflow questions this week. Thanks in advance.
I found that the port I was trying to hit (7171), which was on my skipper server, was not exposed. I had to add and expose the port in the skipper server configuration in my .yml file. I found this post, which clued me in:
How to send HTTP requests to my server running in a docker container?
skipper-server:
  image: springcloud/spring-cloud-skipper-server:2.1.2.RELEASE
  container_name: skipper
  expose:
    - "7171"
  ports:
    - "7577:7577"
    - "9000-9010:9000-9010"
    - "20000-20105:20000-20105"
    - "7171:7171"
  environment:
    - SPRING_CLOUD_SKIPPER_SERVER_PLATFORM_LOCAL_ACCOUNTS_DEFAULT_PORTRANGE_LOW=20000
    - SPRING_CLOUD_SKIPPER_SERVER_PLATFORM_LOCAL_ACCOUNTS_DEFAULT_PORTRANGE_HIGH=20100
    - SPRING_DATASOURCE_URL=jdbc:mysql://mysql:1111/dataflow
    - SPRING_DATASOURCE_USERNAME=xxxxx
    - SPRING_DATASOURCE_PASSWORD=xxxxx
    - SPRING_DATASOURCE_DRIVER_CLASS_NAME=org.mariadb.jdbc.Driver
    - SPRING_RABBITMQ_HOST=127.0.0.1
    - SPRING_RABBITMQ_PORT=xxxx
    - SPRING_RABBITMQ_USERNAME=xxxxx
    - SPRING_RABBITMQ_PASSWORD=xxxxx
  entrypoint: "./wait-for-it.sh mysql:1111 -- java -Djava.security.egd=file:/dev/./urandom -jar /spring-cloud-skipper-server.jar"
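With 7171 exposed and published, a quick way to verify the http source from the host is a plain POST (the payload here is arbitrary):

curl -X POST http://localhost:7171 -H "Content-Type: text/plain" -d "hello"

If the stream is wired up, the transform step should uppercase the payload and the file sink should write HELLO under c:/dataflow-output.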

Any pointers on setting up Chef Push Jobs Client on macOS 10.13?

I am trying to set up chef-push-jobs client on a macOS 10.13 node.
Here is what I did so far:
Installed push-jobs-client for macOS from Chef Downloads
Created a configuration file named push-jobs-client.rb that looks something like this:
chef_server_url 'https://chef.XXXXX.com/organizations/XXXXX'
node_name 'default-macos-1013'
client_key '/opt/chef/embedded/ssl/cert.pem'
trusted_certs_dir '/opt/chef/embedded/lib/ruby/gems/2.4.0/gems/chef-13.8.5/spec/data/trusted_certs'
verify_api_cert true
ssl_verify_mode :verify_peer
allow_unencrypted true
log_level :info
log_location STDOUT
whitelist({"chef-client"=>"chef-client"})
Mixlib::Log::Formatter.show_time = false
Ran this command:
/usr/local/bin/pushy-client -c push-jobs-client.rb
Error Message:
/opt/push-jobs-client/embedded/lib/ruby/gems/2.4.0/gems/opscode-pushy-client-2.4.8/lib/pushy_client.rb:236:in `rescue in get_config': Could not download push jobs config (RuntimeError)
Log:
INFO: [jenkins03] Setting reconfigure deadline to 2018-05-04 12:05:31 +0200
INFO: [jenkins03] using config file path: '/opt/push-jobs-client/push-jobs-client.rb'
INFO: [jenkins03] Using node name: jenkins03
INFO: [jenkins03] Using org name: XXXXX
INFO: [jenkins03] Using Chef server: https://chef.XXXXX.com/organizations/XXXXX
INFO: [jenkins03] Using private key: /opt/chef/embedded/ssl/cert.pem
INFO: [jenkins03] Incarnation ID: 633f168d-c8c0-469e-a9c0-8d6658b3b3d5
INFO: [jenkins03] Allowing fallback to unencrypted connection: true
INFO: [jenkins03] Starting client ...
INFO: [jenkins03] Retrieving configuration from https://chef.XXXXX.com/organizations/XXXXX//pushy/config/jenkins03: ...
INFO: Could not download push jobs config
So it looks like the connection and the authentication are successful, but for some reason push-jobs-client cannot retrieve the configuration from the server.
I tried going to the URL from the log manually in a browser directly on the node, and I see this in the browser window:
{"error":["missing required authentication header(s) 'X-Ops-UserId', 'X-Ops-Timestamp', 'X-Ops-Sign', 'X-Ops-Content-Hash'"]}
So I wonder: did I do anything wrong in the configuration? Or maybe it is a bug in the push-jobs-client for macOS?
I found out that the client_key at /opt/chef/embedded/ssl/cert.pem was not a client key signed by the Chef server. The correct client key was found at /etc/chef/client.pem.
Now everything works, but I will leave the answer here in case anyone else has a similar issue.
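For anyone hitting the same thing, the only change needed in the configuration above is pointing client_key at the key the Chef server actually signed for the node (the path below is from my setup; yours may differ):

client_key '/etc/chef/client.pem'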

Connecting to Mongod via Ruby driver using SSL returns Mongo::ConnectionFailure

I want to use SSL with MongoDB. It's not enabled by default so one has to compile from source with the necessary options. I followed the official documentation and got the v2.6.4 binary built and running nicely on a freshly deployed server running Ubuntu 14.04. All good so far.
Next I set up mongod as described in the official docs. I did follow their example of using a self-signed certificate for testing purposes. The relevant part of the config looks like:
...
net:
  bindIp: 127.0.0.1
  port: 27017
  ssl:
    mode: requireSSL
    PEMKeyFile: /opt/mongo/security/mongodb.pem
...
If I then run the client and specify SSL, I connect fine ($ mongo --ssl). FWIW, if I try without the --ssl argument, it doesn't connect.
Ok, time to link up via Ruby. I'm on the same server and I try the following ruby script:
require 'rubygems'
require 'mongo'
client = Mongo::MongoClient.new('localhost', 27017, {:ssl => true})
Nope. It's not having it:
/home/test/.rvm/gems/ruby-1.9.3-p547/gems/mongo-1.11.1/lib/mongo/mongo_client.rb:422:in `connect': Failed to connect to a master node at localhost:27017 (Mongo::ConnectionFailure)
from /home/test/.rvm/gems/ruby-1.9.3-p547/gems/mongo-1.11.1/lib/mongo/mongo_client.rb:661:in `setup'
from /home/test/.rvm/gems/ruby-1.9.3-p547/gems/mongo-1.11.1/lib/mongo/mongo_client.rb:177:in `initialize'
from test_mongo_ssl.rb:8:in `new'
from test_mongo_ssl.rb:8:in `<main>'
So best to make sure that there's nothing wrong with the default connection without SSL. I disable SSL on mongod and restart. Then try the ruby script again, this time without the ssl option:
...
client = Mongo::MongoClient.new('localhost', 27017)
And it's fine. Therefore I feel I've narrowed it down to the ruby driver & ssl, but beyond that there's little else to go on.
EDIT I tried their Python driver on the same server and used their example program:
from pymongo import MongoClient
c = MongoClient(host="localhost", port=27017, ssl=True)
And that did connect OK. So at least I can feel fairly confident that the mongod is configured properly and the issue lies somewhere within the Mongo Ruby driver. Quite possibly a bug in their current driver (v1.11.1).
UPDATE I've also had success connecting via ssl using the node.js driver:
var mongo = require('mongodb');
var database = new mongo.Db("my_database", new mongo.Server("127.0.0.1", 27017, {ssl: true}), {w: 0});
database.open(function(err, db) {
    if (err) throw err;
    db.authenticate('user', 'password', function(err, result) {
        var collection = db.collection('foo');
        collection.findOne(function(err, item) {
            if (err) throw err;
            console.log(item);
            db.close();
        });
    });
});
At this point it seems increasingly likely that either there's a bug in the Ruby driver, or the documentation is incomplete and does not accurately explain how to use SSL connections. Therefore I've opened a new issue on MongoDB's issue tracker to hopefully get to the bottom of this.
Rather embarrassingly, the solution to this issue was that my /etc/hosts file had a typo in the localhost entry:
127.0.0.1 localhost.localdomain locahost
As you can see, it's missing the second letter L in "localhost". (I suspect it went missing during an accidental vim gesture.) Therefore to resolve I just had to reinstate the missing "l":
127.0.0.1 localhost.localdomain localhost
It's still a mystery why the Python sample worked correctly, and it's because of that that I didn't twig earlier that the problem was with the hosts file.
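If you suspect a similar issue, a quick way to check what "localhost" actually resolves to on the box (getent is standard on Ubuntu) is:

getent hosts localhost

A healthy entry resolves to 127.0.0.1 (or ::1).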
