Create a virtual server under a specific subnet - Ruby

I am using SoftLayer's Ruby API, and I am trying to create a virtual server under a specific subnet in a VLAN, but I couldn't find a way to do it.
At the moment I am using the following Ruby hash:
creation_hash = {
  'complexType' => 'SoftLayer_Virtual_Guest',
  'hostname' => 'XXX',
  'domain' => 'XXXX',
  'datacenter' => { 'name' => @datacenter },
  'startCpus' => sl_machine_type(@params['instance_type'])['cpu'],
  'maxMemory' => sl_machine_type(@params['instance_type'])['memory'],
  'hourlyBillingFlag' => true,
  'blockDeviceTemplateGroup' => { 'globalIdentifier' => @params['image_id'] },
  'localDiskFlag' => false,
  'dedicatedAccountHostOnlyFlag' => true,
  'primaryBackendNetworkComponent' => {
    'networkVlan' => {
      'id' => @private_vlan['id']
    }
  },
  'networkComponents' => [{ 'maxSpeed' => 1000 }],
  'privateNetworkOnlyFlag' => true
}
So when I choose a VLAN, it picks a random subnet under that VLAN.
How can I specify a subnet? I didn't find this option in the documentation.

Unfortunately it is not possible to specify which subnet a server should be provisioned into.
The provisioning system will choose an IP from the VLAN's primary subnet.
The wording is a bit vague in this article, but it states that IPs are automatically assigned. I will get it updated to state that it is not possible to request a specific block of IPs for the primary.
Adding an IP to the server from a secondary subnet directly after provisioning could be a possible workaround. This could be done with a post-install script or a configuration manager (Salt, Chef, etc.) if automation is needed. It would also allow you to control exactly which IPs are used for each server.
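For illustration only (not part of the original answer), a post-install script along these lines could attach an address from a secondary portable subnet once the server is up; the interface name, address, and prefix below are placeholders:

#!/usr/bin/env ruby
# Hypothetical post-install sketch: add an address from a secondary
# portable subnet to the server's private interface after provisioning.
SECONDARY_IP = '10.0.1.25' # placeholder: an address from your secondary subnet
PREFIX       = 26          # placeholder: that subnet's prefix length
IFACE        = 'eth0'      # assumption: the private interface name

system("ip addr add #{SECONDARY_IP}/#{PREFIX} dev #{IFACE}") or
  abort("failed to add #{SECONDARY_IP} to #{IFACE}")

Driving this from your provisioning tooling gives you exact control over which secondary-subnet IP each server ends up with.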


Logstash for Vagrant: Address already in use

I have a Vagrant image running an application that listens on port 2401; depending on the service you want, you call a specific path (e.g. "curl -X GET http://127.0.0.1:2401/provider/ipfix"). To retrieve the output outside the Vagrant machine I have set up port forwarding in the Vagrantfile ("config.vm.network :forwarded_port, guest: 2401, host: 8080"), so running "curl -X GET http://127.0.0.1:8080/provider/ipfix" from the host gives the same output.
I am now installing Logstash. My issue is that when I run Logstash with the config file below, I get the error "Address already in use". I have also tried using fields to steer the specific output. What workaround would you suggest?
input {
  tcp {
    host => "localhost"
    port => 8080
    add_field => {
      "field1" => "provider"
      "field2" => "ipfix"
    }
    codec => netflow {
      versions => [10]
      target => "ipfix"
    }
    type => "ipfix"
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    index => "IPFIX-logstash-%{+YYYY.MM.dd}"
  }
}
If I'm reading this right, you're expecting Logstash to use TCP to connect to localhost:8080 to fetch information that it will then process.
That's not what this input does. This creates a listener on 127.0.0.1:8080, so the error message about 'already in use' is quite correct.
Considering you're using curl as an example of fetching this data, I'd suggest the http_poller input plugin is a better fit for what you want.
input {
  http_poller {
    urls => {
      "IPFIX" => "http://127.0.0.1:8080/provider/ipfix"
    }
    request_timeout => 30
    schedule => { "every" => "5s" }
    tags => [ "ipfix" ]
  }
}
This will hit the known-working curl URL every 5 seconds with a GET request. Because this input doesn't bind to port 8080 itself, the conflict with the Vagrant port-forward goes away.

Tomcat7 chef cookbook ssl problems

I'm trying to set up a Chef recipe for automatic deployment of my app over SSL with the Tomcat chef cookbook.
It works fine without SSL, but when I try to set the attributes for SSL support I get this error:
undefined method `truststore_password' for Custom resource tomcat_instance from cookbook tomcat
My role:
name "myapp"
override_attributes ({
"java" => {
"jdk_version"=> "6"
},
"oracle" => {
"accept_oracle_download_terms" => true
},
"tomcat" => {
"base_version" => 7,
"java_options" => "${JAVA_OPTS} -Xmx128M -Djava.awt.headless=true",
"secure" => true,
"client_auth" => true,
"scheme" => "https",
"ssl_enabled_protocols" => "TLSv1",
"keystore_password" => "mypass",
"truststore_password" => "mypass",
"ciphers" => "SSL_RSA_WITH_RC4_128_SHA",
"keystore_file" => "/etc/tomcat7/client.jks",
"truststore_file" => "/etc/tomcat7/cert.jks"
}
})
run_list "recipe[java]", "recipe[tomcat]"
Maybe I'm missing something, because I can't find any good tutorials on how to do this. I'm also using chef-solo with Vagrant.
If you look at the Tomcat cookbook documentation, you will see the following regarding the truststore_password attribute:
node['tomcat']['truststore_password'] - Generated by the secure_password method from the openssl cookbook; if you are using Chef Solo, set this attribute on the node
Perhaps this means that you cannot set the attribute in your role definition while using Chef Solo, and instead have to add it to the node attributes JSON file manually.
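If so, a minimal node JSON passed to chef-solo might look like this (a sketch; the filename is arbitrary, and you would merge this into whatever attributes and run_list your node file already carries):

{
  "tomcat": {
    "truststore_password": "mypass"
  },
  "run_list": [ "role[myapp]" ]
}

chef-solo -c solo.rb -j node.json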

Export all ES indexes and documents from a remote server (Linux) to a local server (Windows)

How can I export all documents (300,000 docs) from a remote Elasticsearch server deployed on Linux and import them into a local server deployed on Windows? I want to replicate locally the same environment that exists on the remote server.
I would suggest using Logstash to achieve this, with the configuration below. Make sure to replace the source and target hosts, as well as the index and type names, to match your local environment.
File: copy.conf
input {
  elasticsearch {
    hosts => "linux_host:9200"    # your remote Linux host
    index => "index_to_copy"
  }
}
filter {
  mutate {
    remove_field => [ "@version", "@timestamp" ]
  }
}
output {
  elasticsearch {
    host => "localhost"           # your local Windows host
    port => 9200
    protocol => "http"
    manage_template => false
    index => "index_to_copy"
  }
}
And then you can simply launch it with
bin/logstash -f copy.conf
Another possibility is to use the snapshot & restore feature.
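For reference, a snapshot/restore round trip looks roughly like this (a sketch; the repository name and location are placeholders, the repository directory must be copied or shared between the two machines, and recent Elasticsearch versions also require the location to be whitelisted via path.repo in elasticsearch.yml):

curl -XPUT 'http://linux_host:9200/_snapshot/my_backup' -d '{
  "type": "fs",
  "settings": { "location": "/mnt/backups/my_backup" }
}'
curl -XPUT 'http://linux_host:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true'
# copy the repository directory to the Windows machine, register it
# there with the same PUT _snapshot call, then:
curl -XPOST 'http://localhost:9200/_snapshot/my_backup/snapshot_1/_restore'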

Increase RabbitMQ throughput in a multi-machine cluster

I'm using Logstash and Elasticsearch to build a log system. RabbitMQ is used to queue log messages between two Logstash instances.
The message path is like below:
source log -> logstash -> rabbitMQ -> logstash(parse) -> elasticsearch
But I found that no matter how many machines I add to RabbitMQ, it only uses one machine's resources to process messages.
I found some articles saying that clustering just adds reliability and redundancy to prevent message loss.
But what I want is to increase the entire RabbitMQ cluster's throughput (in and out) by adding more machines.
How do I configure my RabbitMQ cluster to increase its throughput?
Any comments are appreciated.
--
PS. I need to add more information here.
In my tests, the system can receive 7000 messages/s but only output 1700 messages/s on a 4-machine cluster, without HA enabled and with just 1 exchange bound to 1 queue, where that queue is bound to 1 node. I guess 1 queue bound to 1 node is the throughput bottleneck. It is difficult to change the routing key now, so we have just one routing key and want to distribute messages to different nodes to increase whole-system throughput.
below is my logstash-indexer config
rabbitmq {
  codec => "json"
  auto_delete => false
  durable => true
  exchange => "logstash-exchange"
  key => "logstash-routingKey"
  queue => "logstash-queue"
  host => "VIP-of-rabbitMQ"
  user => "guest"
  password => "guest"
  passive => false
  exclusive => false
  threads => 4
  prefetch_count => 512
}
You need to add more queues. I guess you are using only one queue, so in other words you are tied to one Erlang process. What you want is to use multiple queues.
Here is a quick and dirty example of how to add some logic to Logstash to send messages to different queues:
filter {
  # check if path contains source subfolder
  if "foo" in [path] {
    mutate { add_field => [ "source", "foo" ] }
  }
  else if "bar" in [path] {
    mutate { add_field => [ "source", "bar" ] }
  }
  else {
    mutate { add_field => [ "source", "unknown" ] }
  }
}
Then in your output:
rabbitmq {
  debug => true
  durable => true
  exchange_type => "direct"
  host => "your_rabbit_ip"
  key => "%{source}"
  exchange => "my_exchange"
  persistent => true
  port => 5672
  user => "logstash"
  password => "xxxxxxxxxx"
  workers => 12
}
Updated:
Take a look at the repositories that this guy has:
https://github.com/simonmacmullen
I guess you will be interested in this one:
https://github.com/simonmacmullen/random-exchange
This exchange type is for load-balancing among consumers.
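As a rough sketch of how that could be wired up (an illustration, not from the original answer; connection details and queue names are placeholders, and the plugin must be enabled on the broker so the "x-random" exchange type exists):

require 'bunny' # gem install bunny

conn = Bunny.new(host: 'your_rabbit_ip', user: 'logstash', password: 'xxxxxxxxxx')
conn.start
ch = conn.create_channel

# The random-exchange plugin registers the "x-random" exchange type;
# each published message is routed to exactly one randomly chosen
# bound queue, regardless of routing key.
ex = ch.exchange('logstash-random', type: 'x-random', durable: true)

# Bind one queue per cluster node so consumption spreads across nodes.
%w[logstash-q1 logstash-q2 logstash-q3].each do |name|
  ch.queue(name, durable: true).bind(ex)
end

conn.close

With a single routing key, messages then spread across the queues (and the nodes hosting them) instead of piling onto one queue.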

Fog VSphere provider vm_clone request cannot use datastore in folder

The following code works great, but only when the datastore specified is in the root of the datacenter. We organise our datastores in folders with the same name as the cluster they are associated with.
I tried putting a path in (e.g. dc_name/ds_name), but that didn't work.
server = connection.vm_clone(
  'datacenter'    => 'EWL',
  'template_path' => '.Templates/RHEL 6.2 x64',
  'name'          => 'new_vm_name',
  'datastore'     => 'E2-CL01-T2-OS-015',
  'dest_folder'   => 'Self-Service',
  'transform'     => 'sparse',
  'power_on'      => false
)
Any clues?
