When a Camel route starts, the status of the route changes. However, it can happen that the route starts correctly but warnings/errors are still logged during startup or at runtime (for example, an incorrect password when starting an FTP component).
These events are logged to the console/log file. I want to get these events programmatically (outside the Camel DSL), for example something like getEvents(routeID, typeEvent, xNumberOfEvents).
Are these events cached somewhere by Camel? Can I retrieve them via something like the ManagedRouteMBean? Or should I write my own caching mechanism using an event notifier (similar to the RiderEventNotifier example) or some kind of error handler?
For example, the following message is written to the log:
2018-10-11 22:15:24.719 WARN 3820 --- [ XNIO-2 task-12]
o.a.c.component.file.remote.FtpConsumer : Error auto creating directory:
due File operation failed: 530 This server does not allow plain FTP. You have
to use FTP over TLS.
. Code: 530. This exception is ignored.
org.apache.camel.component.file.GenericFileOperationFailedException: File
operation failed: 530 This server does not allow plain FTP. You have to use
FTP over TLS.
. Code: 530
The issue with the above is that the route starts up normally. So
ManagedRouteMBean route = context.getManagedRoute(id, ManagedRouteMBean.class);
RouteError lastError = route.getLastError();
returns no error.
Also, these do not seem to be errors on an exchange, so the error handler and event notifiers don't intercept these messages.
I would like to intercept (and cache) such messages by route id.
There is a way to handle these kinds of warnings/errors with the normal error handler by using the so-called bridgeErrorHandler consumer option. See: http://camel.apache.org/why-does-my-file-consumer-not-pick-up-the-file-and-how-do-i-let-the-file-consumer-use-the-camel-error-handler.html
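As a rough illustration of that approach (a sketch only, not verified against the setup above): bridgeErrorHandler is enabled on a placeholder FTP endpoint and an onException block caches the last error message per route id. The endpoint URI, route id, and the in-memory map are assumptions, not taken from the question.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;

public class FtpErrorCachingRoute extends RouteBuilder {

    // hypothetical in-memory cache of the last error message per route id
    private final Map<String, String> lastErrorByRoute = new ConcurrentHashMap<>();

    @Override
    public void configure() {
        // With bridgeErrorHandler=true, consumer-side failures (such as the 530
        // "plain FTP not allowed" error above) are passed to the route's error
        // handler instead of only being logged by the consumer.
        onException(Exception.class)
            .handled(true)
            .process(exchange -> {
                String routeId = exchange.getFromRouteId();
                Exception cause = exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Exception.class);
                lastErrorByRoute.put(routeId, cause != null ? cause.getMessage() : "unknown error");
            });

        from("ftp://user@host/inbox?password=secret&bridgeErrorHandler=true") // placeholder endpoint
            .routeId("ftpRoute")
            .to("log:received");
    }
}
The cached messages could then be exposed through a bean or MBean of your own, which is roughly the getEvents(...) style lookup asked about above.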
Related
I'm trying to route my gRPC requests to a proxy instead of the actual server for some tests. I've set the JVM properties -Dhttps.proxyHost=localhost and -Dhttps.proxyPort=8888 (which is where my proxy server runs; it is actually a Hoverfly proxy server). When I try to send the request, here's the error I see:
2021-06-22 16:23:52.935 - - WARN i.g.n.s.i.n.c.DefaultChannelPipeline grpc-default-worker-ELG-1-16 - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.grpc.netty.shaded.io.netty.handler.proxy.ProxyConnectException: http, none, host.docker.internal/127.0.0.1:8888 => usvc.dev.hostname.com:443, disconnected
at io.grpc.netty.shaded.io.netty.handler.proxy.ProxyHandler.channelInactive(ProxyHandler.java:236)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
at io.grpc.netty.shaded.io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelInactive(CombinedChannelDuplexHandler.java:420)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:390)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:355)
at io.grpc.netty.shaded.io.netty.handler.codec.http.HttpClientCodec$Decoder.channelInactive(HttpClientCodec.java:282)
at io.grpc.netty.shaded.io.netty.channel.CombinedChannelDuplexHandler.channelInactive(CombinedChannelDuplexHandler.java:223)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:390)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:355)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler.channelInactive(Http2ConnectionHandler.java:427)
at io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler.channelInactive(NettyClientHandler.java:432)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1429)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:947)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannel$AbstractUnsafe$8.run(AbstractChannel.java:826)
at io.grpc.netty.shaded.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:474)
at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909)
at io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
I don't fully understand this error message - does this mean that it could not forward the request to the proxy?
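For context, a minimal sketch of the setup described above, with the same proxy properties set programmatically before the channel is built. The target host and port are placeholders taken from the stack trace, and the stub calls are omitted; as far as I know, grpc-java's default proxy detection honors the standard JVM https.proxy* properties, which is why the trace shows a CONNECT attempt going through 127.0.0.1:8888.
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class GrpcProxySketch {
    public static void main(String[] args) {
        // Same settings as the -D flags above, set in code instead
        System.setProperty("https.proxyHost", "localhost");
        System.setProperty("https.proxyPort", "8888");

        // Placeholder target taken from the stack trace
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("usvc.dev.hostname.com", 443)
                .useTransportSecurity()
                .build();

        // ... create stubs and make calls here ...

        channel.shutdownNow();
    }
}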
The Gmail smtp-relay works fine using the sync driver, but if we queue the email we get this error. We cleared config and cache and restarted the queue workers. Tested in prod and dev, same results:
[2021-01-24 20:04:22] production.ERROR: Expected response code 250 but got an empty response {"exception":"[object] (Swift_TransportException(code: 0): Expected response code 250 but got an empty response at /home/****/****/vendor/swiftmailer/swiftmailer/lib/classes/Swift/Transport/AbstractSmtpTransport.php:448)
We were wondering: is this because of serialization, and something is not making it through that process?
We're using the latest stable release of Laravel (>8.0). Gmail SMTP is authenticating just fine, which is why the sync driver sends emails without problems. Maybe there needs to be a delay on the queue jobs so they don't barrage Gmail so quickly? Our code also works fine using, for example, SendGrid as the SMTP relay. Thanks.
See https://laracasts.com/discuss/channels/laravel/laravel-swift-mailer-exception-expected-response-code-250-but-got-an-empty-response-using-gmail-smtp-relay-database-queue-driver
Update your AppServiceProvider.php
Add this inside boot():
// Fix for SwiftMailer Service;
$_SERVER["SERVER_NAME"] = "your.domain.name";
For users of smtp-relay.gmail.com: if you use localhost/127.0.0.1 as the domain during development, you probably need to change the domain name used in the EHLO command that begins the transaction. I solved this by adding &local_domain=dev.mydomain.tld at the end of my DSN:
smtp://smtp-relay.gmail.com:587?encryption=tls&local_domain=dev.mydomain.tld&...
For the SwiftMailer Symfony bundle (since 2.4.0), you can set the local_domain config parameter:
// config/packages/dev/swiftmailer.yaml
swiftmailer:
...
local_domain: dev.mydomain.tld
Explanation for the two previous answers
If $_SERVER["SERVER_NAME"] is the solution:
This applies when you are running under cron.
The reason is that $_SERVER["SERVER_NAME"] is null when the code is executed from cron; $_SERVER["SERVER_NAME"] is usually only defined for HTTP access.
Example implementation (Laravel):
if (!isset($_SERVER['SERVER_NAME'])) {
    $url = config('env.APP_URL');
    $domain = mb_ereg_replace("http(s)?://", "", $url);
    $domainParts = explode('/', $domain);
    // cron has no HTTP context, so derive SERVER_NAME from the configured app URL
    $_SERVER['SERVER_NAME'] = count($domainParts) > 0 ? $domainParts[0] : $domain;
}
References:
Cron Job $_SERVER issue
https://github.com/swiftmailer/swiftmailer/issues/992
If local_domain is the solution:
This applies when you have MAIL_HOST=smtp-relay.gmail.com set in your Laravel project.
The reason is that if local_domain is not set, the EHLO command sent to Gmail will be EHLO [127.0.0.1] and the connection will be dropped.
By the way, I was using a Gmail-to-Gmail alias and did not need the relay in the first place, so I solved the problem by setting MAIL_HOST=smtp.gmail.com.
References:
https://blog.trippyboy.com/2021/laravel/laravel-expected-response-code-250-but-got-an-empty-response/
I had to deal with both of them because of cron messaging and MAIL_HOST=smtp-relay.gmail.com in my environment.
I hope this information will help you.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/sam/Documents/freenet/nifi-automation/src/compose.py", line 122, in <module>
compose_services(env_config, types, NIFI_VERSION, False, bench)
File "/home/sam/Documents/freenet/nifi-automation/src/compose.py", line 11, in compose_services
pg = ProcessorGroups(NIFI_VERSION)
File "/home/sam/Documents/freenet/nifi-automation/src/components/processor_group.py", line 9, in __init__
processor_groups = nipyapi.canvas.list_all_process_groups(pg_id='root')
File "/home/sam/Documents/freenet/nifi-automation/venv/lib/python3.6/site-packages/nipyapi/canvas.py", line 178, in list_all_process_groups
root_flow = recurse_flow(pg_id)
File "/home/sam/Documents/freenet/nifi-automation/venv/lib/python3.6/site-packages/nipyapi/canvas.py", line 64, in recurse_flow
return _walk_flow(get_flow(pg_id))
File "/home/sam/Documents/freenet/nifi-automation/venv/lib/python3.6/site-packages/nipyapi/canvas.py", line 85, in get_flow
raise ValueError(err.body)
ValueError: No applicable policies could be found. Contact the system administrator.
Process finished with exit code 1
I have a few NiFi automation scripts which work perfectly fine when I run them against an unsecured cluster (localhost or elsewhere), but I get this error when I run them against a URL which is behind a Knox gateway.
I can see a few functions in the nipyapi.access_api class:
def knox_callback(**kwargs)
def knox_callback_with_http_info(self, **kwargs):
def knox_request(self, **kwargs):
def knox_request_with_http_info(self, **kwargs):
I can't understand how any of these, alone or in combination with any other function from the class, are meant to be used to overcome this. Any ideas?
EDIT 1:
I'm using the security.py functions; the first is service_login. In the docs it's written: "Login requires a secure connection over https. Prior to calling this method, the host must be specified and the SSLContext should be configured (if necessary)."
set_service_ssl_context seems to serve that purpose, but I'm not sure whether I need it, as it isn't required for one-way TLS.
But I'm confused. I have two URLs: one Knox URL with LDAP login, and a direct URI (although it also redirects to the Knox/LDAP sequence). When I use the Knox URL I get a different error compared to the direct one.
From the direct URI I get:
File /nipyapi/security.py", line 130, in service_login
username=username, password=password)
nipyapi.nifi.rest.ApiException: (409)
Reason: Conflict
HTTP response body: Username/Password login not supported by this NiFi.
And in the case of the Knox URI it throws a similar exception at the same lines, but:
nipyapi.nifi.rest.ApiException: (404)
Reason: Not Found
So I'm assuming I have to use the direct URL. Secondly, why does it say username/password login is not supported? I can log in manually through the LDAP sequence.
My current request is arriving as an anonymous user, so I'm going to use the certs and try the set_service_ssl_context function with the PEM files.
By the way, below are the two URLs.
"nifi_host": "https://****.****.net:8443/nifi-api", DIRECT URL
"nifi_host": "https://****-****.****.net:8443/gateway/****-sso/nifi-api", knox url
EDIT 2:
My request is still being received as anonymous at the server, even with the following code:
nipyapi.security.set_service_ssl_context(service='nifi', ca_file=None, client_cert_file="bi.keystore", client_key_file=None, client_key_password="infraop6043")
nipyapi.security.service_login(username='myuser', password='mypass')
It gives a connection error here:
nipyapi.nifi.AccessApi().create_access_token(username=username, password=password)
and shows this error: HTTP response body: Username/Password login not supported by this NiFi
I'm not sure how to use set_service_ssl_context properly. Maybe instead of bi.keystore I should try directly using the Let's Encrypt root CA (letsencrypt.org/certs/isrgrootx1.pem.txt) or my local system CA certs.
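For what it's worth, a rough sketch of how those calls could fit together, assuming PEM-format certificates; the file paths are placeholders, the host is the masked direct URL shown above, and username/password login only works if the NiFi has a login identity provider configured:
import nipyapi

# Point the client at the (masked) direct NiFi API URL
nipyapi.config.nifi_config.host = "https://****.****.net:8443/nifi-api"

# PEM files, not a JKS keystore: the CA chain that signed the NiFi/Knox server
# certificate, plus an optional client cert/key pair if two-way TLS is required
nipyapi.security.set_service_ssl_context(
    service='nifi',
    ca_file='/path/to/ca-chain.pem',          # placeholder
    client_cert_file='/path/to/client.pem',   # placeholder, omit for one-way TLS
    client_key_file='/path/to/client.key',    # placeholder, omit for one-way TLS
    client_key_password=None,
)

# Only works if the NiFi exposes username/password login (e.g. an LDAP provider);
# otherwise authentication has to come from the client certificate above
nipyapi.security.service_login(service='nifi', username='myuser', password='mypass')

print(nipyapi.canvas.list_all_process_groups(pg_id='root'))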
My properties file for the toolkit CLI was:
baseUrl=https://svc-hadoop-utilities-pre-c3-02.jamba.net:18443
keystore=/home/jread/nifi-toolkit/bi.keystore
keystoreType=JKS
keystorePasswd=infraop6043
keyPasswd=
truststore=/usr/lib/java/jre/lib/security/cacerts
truststoreType=JKS
truststorePasswd=changeit
proxiedEntity=CN=bijobs.jamba.net
The error message you have, 'No applicable policies could be found. Contact the system administrator.', is typically produced by the Ranger plugin when security is enabled and the user you are connecting with is not permitted; I have not seen Knox produce it.
Can you confirm that you have not enabled Ranger without policies when enabling Knox for NiFi, and that this is the error message you are getting?
If you look in nifi-app.log, I suspect you'll see the same error being produced, which would suggest to me that NiPyApi is just surfacing the error that NiFi is generating due to an incomplete security setup.
I work on a project which uses the session a lot. We have a DB session handler (the standard one from Zend), and currently I have this initialization (DB handler + session start) in a plugin for the preDispatchLoop. Previously it was in preDispatch, but because that is called for each action (including those reached via 'forward'), it caused me problems.
My problem is that I began working on internationalization and started using the router to detect the language in the URI (we use the form /language/controller/action). The router would like to use the session to read/store the language. But, as you may know, the router runs first and only then the (pre/post) dispatch stuff.
So the question would be: why not move the session initialization to the bootstrap? It was there before, but I had to move it because I need to test that the DB (remember that the session uses the DB) is accessible, to prevent errors. If there's an error I simply redirect (request->setController/setAction to the error controller/action). If I move the session initialization code back to the bootstrap, I can't perform that redirect when the DB is not accessible.
I've read other questions and found many people asking how to access the request object from the bootstrap. But they all say: you can, but shouldn't. Then how should I handle this case? My last option would be to move the session initialization back to the bootstrap and, if it fails, manually send the headers and render the error view, but that's a horrible hack.
My feeling is that the session shouldn't be used that early. It shouldn't be started in the bootstrap, since at that point we're not yet aware of the requested controller/action. I think that to get the language I could simply rely on a (manually handled) cookie and read it from there (as well as from the URI); a rough sketch of that idea follows the links below. And if some day the session info really must be used in the bootstrap, I would use a global variable.
What do you think? Is there an error in the way I'm structuring the application?
Some related questions I've seen:
Zend Framework: Getting request object in bootstrap
Best way to deal with session handling in Zend Framework
(Zend version 1.9.6, not using Application nor Bootstrap)
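For illustration only, a rough sketch of the cookie fallback idea described above, assuming ZF1 front controller plugin conventions; the class, cookie, and parameter names are placeholders, not the project's actual code:
class My_Plugin_Language extends Zend_Controller_Plugin_Abstract
{
    public function routeStartup(Zend_Controller_Request_Abstract $request)
    {
        // /language/controller/action => the first path segment is the language
        // (assumes the usual Zend_Controller_Request_Http instance)
        $segments = explode('/', trim($request->getPathInfo(), '/'));
        $language = (isset($segments[0]) && $segments[0] !== '') ? $segments[0] : null;

        if ($language === null && isset($_COOKIE['lang'])) {
            // fall back to the manually handled cookie, no session needed this early
            $language = $_COOKIE['lang'];
        }

        if ($language !== null) {
            setcookie('lang', $language, time() + 30 * 24 * 3600, '/');
            $request->setParam('language', $language);
        }
    }
}
Such a plugin would be registered on the front controller with registerPlugin() before dispatching.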
I would move the session initialization and DB connection to the bootstrap.
If you can't connect to your DB during bootstrap, that should count as a low-level error. It is not expected to happen in production.
Simply wrap your bootstrapping process in a try/catch block so you can output an error page.
// in your index.php
try {
    $application = new Zend_Application(
        APPLICATION_ENV,
        APPLICATION_PATH . '/configs/application.ini'
    );
    $application->bootstrap()
                ->run();
} catch (Exception $e) {
    header('Content-Type: text/html; charset=utf-8');
    header('HTTP/1.1 503 Service Unavailable');
    header('Retry-After: 3600');
    // Output some error page.
    echo "<html><head><title>System Error</title></head><body>...</body></html>";
}
Using Dojo, is it possible to make an Ajax call using xhrPost from an HTTP page to an HTTPS URL?
The URL must be HTTPS (as defined in the Struts configuration).
If I simply set "MyCommand" as the 'url' parameter of the xhrGet, I get a 302 response code.
If I transform "MyCommand" using javascript to something like "https://......./servlet/MyCommand" I see the following error in Firebug: "uncaught exception: Permission denied to call method XMLHttpRequest.open".
I'm stuck with both approaches; the only solution I found is to remove the "https" clause from the Struts configuration file, and of course that is not a correct solution :)
Thanks for any help.
Best regards,
Nils
Connecting to HTTPS from an HTTP page involves a different scheme (and port) on the target. This violates the same-origin policy, which is enforced by the browser on the running JavaScript code.
It should work with an iframe.
dojo.io.iframe encapsulates this behaviour for you
http://docs.dojocampus.org/dojo/io/iframe
If your server responds with a redirect to a non-SSL (i.e. same-origin) page, you should be able to read the response (because the iframe is then back in the same origin).
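A minimal sketch of what that could look like with the iframe transport (Dojo 1.x); the URL and field names are placeholders, not the actual Struts action:
dojo.require("dojo.io.iframe");

dojo.io.iframe.send({
    url: "https://myserver.example.com/servlet/MyCommand", // placeholder HTTPS URL
    method: "post",
    handleAs: "html",               // the iframe transport hands back the loaded document
    content: { param1: "value1" },  // placeholder form fields
    load: function (response, ioArgs) {
        // readable only if the final (redirected) page is same-origin, as noted above
        console.log("iframe response received", response);
    },
    error: function (err, ioArgs) {
        console.error("iframe request failed", err);
    }
});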