I am a beginner with Elasticsearch and Hadoop, and I am having a problem moving data from HDFS into an Elasticsearch server using es.net.proxy.http.host with credentials. The server is secured with credentials using an nginx proxy configuration, but when I try to move data with a Pig script it throws a NullPointerException.
My Pig script is:
REGISTER elasticsearch-hadoop-1.3.0.M3/dist/elasticsearch-hadoop-1.3.0.M3.jar
A = load 'date' using PigStorage() as (date:datetime);
store A into 'doc/id' using org.elasticsearch.hadoop.pig.EsStorage('es.net.proxy.http.host=ipaddress','es.net.proxy.http.port=portnumber','es.net.proxy.http.user=username','es.net.proxy.http.pass=password');
I don't understand where the problem is in my script. Can anyone please help me?
Thanks in advance.
I faced this type of problem. elasticsearch-hadoop-1.3.0.M3.jar does not support the proxy and authentication settings; try the elasticsearch-hadoop-1.3.0.BUILD-SNAPSHOT.jar file instead. Even then, though, I couldn't move object data such as Tuple to the production server with authentication.
Thank you.
I'm trying to obtain access to a NiFi flow project through nipyapi and LDAP.
I have the NiFi flow and registry up and running, and a login/password ('login'/'password').
import nipyapi
nipyapi.config.nifi_config.host = 'https://nifiexample.com/nifi'
nipyapi.config.registry_config.host = 'https://nifiexample.com/nifi-registry'
print(nipyapi.canvas.get_root_pg_id())
I read the docs and found this method:
nipyapi.security.set_service_ssl_context(service='nifi', ca_file=None, client_cert_file=None, client_key_file=None, client_key_password=None)
but as I'm not a developer, I don't understand how to use it properly.
Can someone please tell me what other configs/properties I should add to run this simple script?
I would recommend working through the Secured Connection Demo from the docs; its Python code goes through this process step by step.
Understanding how NiFi uses TLS and performs authentication and authorization will also help these steps make sense.
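To make that concrete, here is a minimal sketch of what your script could look like once the SSL context and login are wired in. The certificate path is a placeholder, I'm assuming your LDAP account authenticates through NiFi's normal username/password login, and note that nipyapi expects the REST API URLs (/nifi-api, /nifi-registry-api) rather than the UI URLs.

import nipyapi

# nipyapi talks to the REST API, so point it at /nifi-api and
# /nifi-registry-api rather than the UI pages.
nipyapi.config.nifi_config.host = 'https://nifiexample.com/nifi-api'
nipyapi.config.registry_config.host = 'https://nifiexample.com/nifi-registry-api'

# Trust the server's TLS certificate; /path/to/ca.pem is a placeholder
# for the CA bundle that signed your NiFi certificate.
nipyapi.security.set_service_ssl_context(service='nifi', ca_file='/path/to/ca.pem')
nipyapi.security.set_service_ssl_context(service='registry', ca_file='/path/to/ca.pem')

# Log in with the same LDAP credentials you use in the UI.
nipyapi.security.service_login(service='nifi', username='login', password='password')

print(nipyapi.canvas.get_root_pg_id())

If the login succeeds but calls still fail with a 403, that usually points at authorization (policies) rather than authentication.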
Is it possible to set default.password dynamically, e.g. from a file? We have connected Presto to Zeppelin with a JDBC connector successfully; however, we are using a different authentication method that requires us to renew the password every day. I have checked the current GitHub repository and found that there is an interpreter.json that takes default.password from the interpreter settings in Zeppelin. If I change default.password to an environment variable, will it affect the other JDBC interpreters? Is there a workaround?
Links to the repository:
https://github.com/apache/zeppelin/blob/e63ba8e897a522c6cad099286110c2eaa1496912/jdbc/src/main/resources/interpreter-setting.json
https://github.com/apache/zeppelin/blob/8f45fefb1c45ab163bedb94e3d9a9ef8a35afd91/jdbc/src/main/java/org/apache/zeppelin/jdbc/JDBCInterpreter.java
I figured out the problem. The interpreter.json in the config directory stores all the information for each JDBC connection, so by updating the password there with a jq command and restarting Zeppelin every day, the password can be updated dynamically.
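For reference, that daily update can be scripted; below is a minimal sketch in Python rather than jq. The paths, the interpreter name, and the property layout are all assumptions about your installation (depending on the Zeppelin version, each property is either a plain string or a {name, value, type} object), so adjust accordingly.

import json
import subprocess

# All paths and names below are assumptions; adjust for your installation.
INTERPRETER_JSON = '/opt/zeppelin/conf/interpreter.json'
PASSWORD_FILE = '/etc/secrets/presto_password'  # wherever the renewed password lands
INTERPRETER_NAME = 'presto'

with open(PASSWORD_FILE) as f:
    new_password = f.read().strip()

with open(INTERPRETER_JSON) as f:
    config = json.load(f)

# interpreterSettings is keyed by an internal id, so match on the name.
for settings in config['interpreterSettings'].values():
    if settings.get('name') == INTERPRETER_NAME:
        prop = settings['properties']['default.password']
        if isinstance(prop, dict):
            prop['value'] = new_password  # newer layout: {name, value, type}
        else:
            settings['properties']['default.password'] = new_password  # older layout

with open(INTERPRETER_JSON, 'w') as f:
    json.dump(config, f, indent=2)

# Zeppelin only rereads interpreter.json on startup, so restart it.
subprocess.run(['/opt/zeppelin/bin/zeppelin-daemon.sh', 'restart'], check=True)

Run it from cron right after the password is renewed, and the interpreter will pick up the new value on restart.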
I have created a Spark API token and am trying to register it using this command:
spark register token-value
but I am getting an error:
Cannot open source file register.ada
Does anyone have an idea what might be causing this error on Ubuntu?
You need to buy Spark to get the API token. When you have it, replace token-value with your API key.
What solved it for me was removing AdaCore SPARK, whose spark executable (an Ada toolchain, hence register.ada) was shadowing the Laravel Spark installer: https://laracasts.com/discuss/channels/spark/cant-find-registerada-when-tried-to-register-spark-api-token
I am new to Mule and have been struggling with a simple issue for a while now. I am trying to connect to flat files (.MDB, .DBF) located on a remote desktop from my Mule application, using Mule's generic database connector. I have tried different things here:
I am using the StelsDBF and StelsMDB drivers for JDBC connectivity. I tried connecting directly using the JDBC URL jdbc:jstels:mdb:host/path.
I have also tried to access the files through FTP, by running a FileZilla server on the remote desktop and using the JDBC URL jdbc:jstels:dbf:ftp://user:password@host:21/path in my app.
Neither of these seems to work; I always get connection exceptions. If anyone has tried this before, what is the best way to go about connecting to a remote flat file from Mule? Your response will be greatly appreciated!
If you want to load the contents of the file inside a Mule flow, you should use the File or FTP connector; I don't know for sure about your JDBC option.
With the File connector you can access local files (files on the server where Mule is running), so you could try to mount the folders as a share.
Or run an FTP server like you already tried; that should work.
There is probably an error in your syntax or connection.
Please paste the complete XML of your Mule flow so we can see what you are trying to do.
Your use case is still not really clear to me: are you really planning to use HTTP to trigger the DB every time? Anyway, did you try putting the file on a local path and using that path in your database URL? Here is someone who says he had it working; he created a separate bean:
http://forums.mulesoft.com/questions/6422/setting_property_dynamically_on_jdbcdatasource.html
I think a local path may well be possible, and it's better to test that first.
Also take note of how to refer to a file path; look at the examples for the File connector: https://docs.mulesoft.com/mule-user-guide/v/3.7/file-transport-reference#namespace-and-syntax
If you manage to get it working and can use the path directly in the JDBC URL, you should have a look at the poll scope:
https://docs.mulesoft.com/mule-user-guide/v/3.7/poll-reference
You can use your DB connector as an inbound endpoint when it is wrapped in a poll scope.
I experienced the same issue when connecting to a Microsoft Access database (*.mdb, *.accdb) using the Mule Database Connector. After further investigation, it was solved by installing the Microsoft Access Database Engine.
Another issue: I couldn't pass a parameter to construct a query the way I do for other databases, e.g.: SELECT * FROM emplcopy WHERE id = #[payload.id]
To solve this issue:
I changed the query type from Parameterized to Dynamic.
I generated the query inside a Set Payload transformer (building the query as a string, e.g.: SELECT * FROM emplcopy WHERE id = '1').
Finally, I put it into the Dynamic query area: #[payload]
I am new to Cassandra and was trying to achieve some simple operations like inserting data into Cassandra. I am using the cassandra gem to achieve this.
client = Cassandra.new('tags_logs', 'ec2-xxx-xxx-xxx.com:9160')
client.disable_node_auto_discovery!
client.get('tag_data','red')
And I get the following error:
ThriftClient::NoServersAvailable - No live servers in ...
I'm running this code from my local machine. While I have no problem connecting using cassandra-cli (so it is not a firewall issue), the code refuses to work, even though it works perfectly when accessing Cassandra on my own local machine.
Any ideas?
Thanks,
Eden.
I recommend using this gem I'm developing: https://github.com/hsgubert/cassandra_migrations
It gives access to Cassandra through CQL3 and manages the schema with migrations.
Note: it requires Rails.
For future generations: simply increase the connection timeout:
client = Cassandra.new('tags_logs', 'ec2-example-example-example.com:9160', :connect_timeout => 10000)