I have activated the default "elastic" user and set a password for it. I am using elasticsearch-php to connect to and query my Elasticsearch instance. It was working fine, but since setting the password I can no longer connect with my previous code.
While looking for a way to pass authentication information to the ClientBuilder, I only found configuration options for the SSL connection. I did not find anything on how to use my Elasticsearch username and password with the connection.
$hosts = ['127.0.0.1:9200'];
$es_user = 'elastic';
$es_pass = 'MY_PASS';
$elasticsearch = Elasticsearch\ClientBuilder::create()->setHosts($hosts)->build();
I am wondering how I can use my username/password in the connection above.
You can do it with an extended host configuration like this:
$hosts = [
    [
        'host' => '127.0.0.1',
        'port' => '9200',
        'user' => 'elastic',
        'pass' => 'MY_PASS'
    ]
];
$elasticsearch = Elasticsearch\ClientBuilder::create()
    ->setHosts($hosts)
    ->build();
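If you are on elasticsearch-php 7.x or later, the ClientBuilder also exposes the credentials as a dedicated setter. A minimal sketch, assuming that client version (setBasicAuthentication() is not available in older releases):

// Sketch for elasticsearch-php 7.x+, where setBasicAuthentication() exists.
$elasticsearch = Elasticsearch\ClientBuilder::create()
    ->setHosts(['127.0.0.1:9200'])
    ->setBasicAuthentication('elastic', 'MY_PASS')
    ->build();

// Quick sanity check: info() returns cluster metadata and
// throws an exception if authentication fails.
print_r($elasticsearch->info());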
I have a problem: I want to send data to a host using a username and password, but I keep getting the same error message. I would be very happy if you could help.
My conf file:
input {
  file {
    path => "........../*.txt"
    start_position => "beginning"
    sincedb_path => "NUL"
  }
}
filter {
  .............
}
output {
  elasticsearch {
    hosts => "xx.xx.xxx.xxx:xxxx"
    manage_template => false
    index => "my_index_name"
    document_type => "my_index_name"
    user => "my_user_name"
    password => "my_password"
  }
}
Error message:
[WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect
connection to dead ES instance, but got an error
{:url=>"http://elastic_user_name:xxxxxx@xx.xx.xxx.xxx:xxxx/",
:exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError,
:message=>"Got response code '403' contacting Elasticsearch at URL
'http://xx.xx.xxx.xxx:xxxx/'"}
I also made the following changes to the logstash.yml and elasticsearch.yml files, but I got the same error.
elasticsearch.yml:
xpack.management.elasticsearch.username: my_elastic_user_name
xpack.management.elasticsearch.password: my_password

logstash.yml:
xpack.monitoring.elasticsearch.username: my_elastic_user_name
xpack.monitoring.elasticsearch.password: my_password
Receiving an HTTP 403 response code ("Forbidden") indicates that your user does not have permission to index data into Elasticsearch (see this answer for the difference between Unauthorized (401) and Forbidden (403)).
Your user needs a role whose cluster and index privileges allow it to write to the target index. Setting up such a role is described here.
Please refer to the documentation about security privileges in order to adapt it to your use case.
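For illustration, here is a minimal sketch of such a role and user created through the Elasticsearch security API. The logstash_writer name is just an example, the privilege list follows the role suggested in the Logstash documentation, and on 6.x-era clusters the endpoints live under /_xpack/security/ instead of /_security/:

POST /_security/role/logstash_writer
{
  "cluster": ["manage_index_templates", "monitor"],
  "indices": [
    {
      "names": ["my_index_name"],
      "privileges": ["create_index", "create", "write"]
    }
  ]
}

POST /_security/user/my_user_name
{
  "password": "my_password",
  "roles": ["logstash_writer"]
}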
For X-Pack monitoring, you should create a user with the logstash_system built-in role.
I hope I could help you.
I want to send data to two endpoints from Logstash; one is an HTTP endpoint and the other is HTTPS.
I tried putting the username and password for the HTTPS endpoint in the URL itself, but Logstash applies those fields (username and password) to the other endpoint as well.
My current output block looks like this:
output {
  elasticsearch {
    index => "index_name"
    document_id => "%{index_id}"
    hosts => ["https://elastic:pass@clusterid.asia-northeast1.gcp.cloud.es.io:9243",
              "http://127.0.0.1:9200"]
  }
}
I am getting this message in the logs:
Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:xxxxxx@clusterid.asia-northeast1.gcp.cloud.es.io:9243/, https://elastic:xxxxxx@127.0.0.1:9200/]}}
And this:
[logstash.agent] Failed to execute action {:id=>:"cloud-elastic", :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<cloud-elastic>, action_result: false", :backtrace=>nil}
Please try using a separate elasticsearch output for HTTPS and HTTP, with settings like those below:
if "Https" in [tag]{
elasticsearch {
hosts => [ "https://elastic:pass#clusterid.asia-northeast1.gcp.cloud.es.io:9243" ]
user => "${ES_USER:admin}"
password => "${ES_PWD:admin}"
ssl => true
ssl_certificate_verification => false
cacert => "${CERTS_DIR}/certs/ca.crt"
index => "%{[docInfo][index]}"
action => "index"
}
} else {
elasticsearch {
hosts => [ "http://127.0.0.1:9200" ]
index => "%{[docInfo][index]}"
action => "index"
}
}
In your .bashrc file, set the following environment variables:
export ES_USER=elastic
export ES_PWD=pass
export CERTS_DIR=/home/abc
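Note that Logstash's ${VAR:default} syntax falls back to the default value ("admin" here) when the variable is unset, so make sure these variables are exported in the environment that actually launches Logstash (for example, the service unit), not only in your interactive shell.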
I would like to change how GitLab authenticates against AD, since it sends the bind request as "CN=user,OU=xx,DC=xx" but the AD needs it to be sent as Domain\user. How can I change the GitLab config to send "domain\username" in the bind request?
Or, alternatively, why would the Windows AD reject the authentication?
Below is my LDAP configuration:
gitlab_rails['ldap_servers'] = {
  'main' => {
    'label' => 'AD',
    'host' => '10.0.0.1',
    'port' => 389,
    'uid' => 'sAMAccountName',
    'base' => 'DC=AAA,DC=ORG,DC=LOCAL',
    'bind_dn' => 'AAA\abcdefgh',
    'password' => 'Password4',
    'block_auto_created_users' => 'true',
    'active_directory' => true,
    'lowercase_usernames' => true,
  }
}
[Wireshark capture omitted: the bind password is sent in packets 4 and 18.]
Active Directory doesn't need the logon to be in the format domain\uid; that's one of the three valid ID formats you can use when binding to AD via LDAP, but uid@domain and a fully qualified LDAP DN are equally valid. What GitLab does is bind to AD using the bind_dn and password from the config (Wireshark #4), search under the configured 'base' for (&(sAMAccountName=<user-supplied uid>)) (Wireshark #11), take the fully qualified DN of the matching account from the result (Wireshark #16), and then validate the user's credentials by binding with that fully qualified DN and the user-supplied password (Wireshark #18).
52e (in the bind error data) is returned when the user ID matches a valid user but the password is incorrect. Is the correct user being found for the ID supplied (i.e., is the user in the dc=aaa,dc=org,dc=local domain, and can it be found under the Head Office\Users OU)? If you select packet 18, you'll see the password within the packet data; verify it didn't get mangled in transit or mistyped.
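You can reproduce GitLab's two bind steps outside of GitLab with ldapsearch to narrow down which one fails. A sketch, assuming the OpenLDAP client tools are installed; someuser and the DN in the second command are hypothetical placeholders:

# Step 1: bind as the service account and search for the user's DN.
ldapsearch -H ldap://10.0.0.1:389 -D 'AAA\abcdefgh' -w 'Password4' \
  -b 'DC=AAA,DC=ORG,DC=LOCAL' '(sAMAccountName=someuser)' dn

# Step 2: bind as the DN returned above, using the user's own password.
ldapsearch -H ldap://10.0.0.1:389 \
  -D 'CN=Some User,OU=Users,OU=Head Office,DC=AAA,DC=ORG,DC=LOCAL' \
  -w 'TheUserPassword' -b 'DC=AAA,DC=ORG,DC=LOCAL' -s base '(objectClass=*)'

If the second command also fails with 52e, the problem is the credentials themselves rather than GitLab's bind format.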
I'm using Amazon Elasticsearch Service 2.3.4 and Logstash 2.3.0.
My configuration:
input {
  jdbc {
    # MySQL JDBC connection string to our database
    jdbc_connection_string => "jdbc:mysql://awsmigration.XXXXXXXXX.ap-southeast-1.rds.amazonaws.com:3306/admin?zeroDateTimeBehavior=convertToNull"
    # The user we wish to execute our statement as
    jdbc_user => "dryrun"
    jdbc_password => "dryruntesting"
    # The path to our downloaded JDBC driver
    jdbc_driver_library => "/opt/logstash/drivers/mysql-connector-java-5.1.39/mysql-connector-java-5.1.39-bin.jar"
    # The name of the driver class for MySQL
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    # Our query
    statement => "SELECT * from Receipt"
    jdbc_paging_enabled => true
    jdbc_page_size => 200
  }
}
output {
  elasticsearch {
    index => "slurp_receipt"
    document_type => "Receipt"
    document_id => "%{uid}"
    hosts => ["https://search-XXXXXXXXXXXX.ap-southeast-1.es.amazonaws.com:443"]
    aws_access_key_id => 'XXXXXXXXXXXXXXXXX'
    aws_secret_access_key => 'XXXXXXXXXXXXXXX'
  }
}
I got these errors:
Fri Aug 26 07:30:13 UTC 2016 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
Unknown setting 'aws_access_key_id' for elasticsearch {:level=>:error}
Unknown setting 'aws_secret_access_key' for elasticsearch {:level=>:error}
Pipeline aborted due to error {:exception=>#<LogStash::ConfigurationError: Something is wrong with your configuration.>, :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/config/mixin.rb:134:in `config_init'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/outputs/base.rb:63:in `initialize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/output_delegator.rb:74:in `register'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:181:in `start_workers'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:181:in `start_workers'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:136:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/agent.rb:473:in `start_pipeline'"], :level=>:error}
How can I solve this?
Assuming you are using the amazon_es plugin, your output should look like this:
output {
  amazon_es {
    index => "slurp_receipt"
    hosts => ["https://search-XXXXXXXXXXXX.ap-southeast-1.es.amazonaws.com:443"]
    aws_access_key_id => 'XXXXXXXXXXXXXXXXX'
    aws_secret_access_key => 'XXXXXXXXXXXXXXX'
  }
}
aws_access_key_id and aws_secret_access_key are not valid configuration options for the Logstash elasticsearch output plugin (cf. the documentation).
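If the amazon_es output is not installed yet, it has to be added as a plugin first. A sketch; the plugin executable was renamed between major versions:

# Logstash 2.x (as used in the question)
bin/plugin install logstash-output-amazon_es

# Logstash 5.x and later
bin/logstash-plugin install logstash-output-amazon_es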
I am using SoftLayer's Ruby API, and I am trying to create a virtual server under a specific subnet in a VLAN, but I couldn't find a way of doing it.
At the moment I am using the following hash:
creation_hash = {
  'complexType' => 'SoftLayer_Virtual_Guest',
  'hostname' => XXX,
  'domain' => XXXX,
  'datacenter' => { 'name' => @datacenter },
  'startCpus' => sl_machine_type(@params['instance_type'])['cpu'],
  'maxMemory' => sl_machine_type(@params['instance_type'])['memory'],
  'hourlyBillingFlag' => true,
  'blockDeviceTemplateGroup' => { 'globalIdentifier' => @params['image_id'] },
  'localDiskFlag' => false,
  'dedicatedAccountHostOnlyFlag' => true,
  'primaryBackendNetworkComponent' => {
    'networkVlan' => {
      'id' => @private_vlan['id']
    }
  },
  'networkComponents' => [{ 'maxSpeed' => 1000 }],
  'privateNetworkOnlyFlag' => true
}
So when I choose a VLAN, it picks a random subnet under that VLAN.
How can I specify a subnet? I didn't find this option in the documentation.
Unfortunately it is not possible to specify which subnet a server should be provisioned into.
The provisioning system will choose an IP from the VLAN's primary subnet.
The wording is a bit vague in this article, but it states that IPs are automatically assigned. I will get it updated to state that it is not possible to request a specific block of IPs for the primary.
Adding an IP to the server from a secondary subnet directly after provisioning could be a possible workaround. This could be done with a post-install script or a configuration manager (Salt, Chef, etc.) if automation is needed. It would also allow you to control specifically which IPs are used for each server.
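For example, to see which subnets (and which IPs) exist on the VLAN before picking one, you could list them with the softlayer_api gem. A minimal sketch, assuming hypothetical SL_USER/SL_API_KEY credentials and the @private_vlan['id'] from the question:

require 'softlayer_api'

# Hypothetical credentials; replace with your own.
client = SoftLayer::Client.new(username: 'SL_USER', api_key: 'SL_API_KEY')

# SoftLayer_Network_Vlan::getSubnets returns every subnet on the VLAN,
# including secondary subnets you could assign IPs from after provisioning.
subnets = client['Network_Vlan'].object_with_id(@private_vlan['id']).getSubnets

subnets.each do |s|
  puts "#{s['networkIdentifier']}/#{s['cidr']} (#{s['subnetType']})"
end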