I have set up IKS and logged in to the command line of one of the containers.
I need to execute a script on that container that connects as a client to a Redis cache.
Here is the script (testScript.py) I want to execute:
import redis
r = redis.Redis(host='master.some.path.of.redis.url.amazonaws.com', port=6323, password='somePassword', ssl=True)
r.set('foo', 'bar')
value = r.get(‘foo’)
print(value)
I need help understanding how to set up Redis on IKS.
Okay, after a week or more, my Aurora cluster is running. This was not really easy but, nevertheless, I got there.
I have a simple aurora file
# copy frontend into the local sandbox
clone_service = Process(
  name = 'copy service',
  cmdline = 'git clone https://citrullin@bitbucket.org/jakiku/frontend.git frontend')

install_npm_deps = Process(
  name = 'install npm dependencies',
  cmdline = 'cd frontend && npm install'
)

run_server = Process(
  name = 'run server',
  cmdline = 'node server.js'
)

# describe the task
run_frontend_service = SequentialTask(
  processes = [clone_service, install_npm_deps, run_server],
  resources = Resources(cpu = 1, ram = 128*MB, disk = 64*MB))

jobs = [
  Service(cluster = 'mesos-fr',
          environment = 'devel',
          role = 'www-data',
          name = 'frontend_service',
          task = run_frontend_service)
]
Nothing special. I only want to define which port I need to use. I checked Resources(port = 3000), but it doesn't work. A port isn't really a resource; it's an attribute in Mesos.
Generally speaking, you want to avoid static ports with Aurora jobs. Since any number of tasks could land on the same host, there's no good way to guarantee that multiple tasks won't request the same port, causing one of them to randomly fail.
The recommended way to solve this problem is to request a port from Mesos using the thermos namespace in your aurora config. For example, if you were to do something like:
run_server = Process(
  name = 'run server',
  cmdline = 'node server.js --port={{thermos.ports[http]}}'
)
Then Aurora will assign a random port to your task when it is assigned to a host.
The obvious question this raises is: how do other things find your service if it's running on a randomly assigned port that can change over time as your task moves between hosts? The answer is service discovery. If you add announce = Announcer() to your job configuration, your task will be added to a ServerSet, which other tasks can use to discover and communicate with it.
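For example, a sketch building on the job above (Announcer's primary_port names which requested port gets announced; it defaults to 'http', matching the thermos.ports[http] request):

jobs = [
  Service(cluster = 'mesos-fr',
          environment = 'devel',
          role = 'www-data',
          name = 'frontend_service',
          announce = Announcer(primary_port = 'http'),
          task = run_frontend_service)
]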
References:
Mesos documentation on configuring agents to offer ports.
Aurora documentation on requesting ports.
I'd like to configure Jenkins from a bash script.
The action is to enable global security and use the Unix user/group database for it.
Can I just copy some configuration XMLs and restart the server? Or is there a Groovy script to do that?
That's it:
import jenkins.model.*
import hudson.security.*

def instance = Jenkins.getInstance()

// Unix user/group database via PAM; the argument is the PAM service name.
def unixRealm = new PAMSecurityRealm("ssh")
instance.setSecurityRealm(unixRealm)

// Any authenticated user gets full control.
instance.setAuthorizationStrategy(new FullControlOnceLoggedInAuthorizationStrategy())
instance.save()
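To drive this from bash, as asked, one option is to POST the Groovy to Jenkins' /scriptText endpoint; alternatively, drop the file into $JENKINS_HOME/init.groovy.d/ and restart, and it runs at startup. A minimal sketch of the former, assuming the script above is saved as enable_security.groovy and that the URL and credentials are placeholders for your own:

curl --user admin:API_TOKEN --data-urlencode "script=$(< enable_security.groovy)" http://localhost:8080/scriptText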
I need to give a server name to a Maven build. During the build, this server name will be used to make a call to that server and run some tests on it.
Our servers have Jenkins slaves on them and are grouped using labels.
Example
Slaves/Node | Label
Server1     | BackEndServers
Server2     | BackEndServers
Server3     | FrontEndServers
Server4     | FrontEndServers
With the Elastic Axis plugin I can say: run my Jenkins job on this node label (for example BackEndServers), and the same project will be executed on both servers (Server1 & Server2).
In my case I cannot do this, as Maven is not installed on the BackEndServers where my code is running. But the Maven build still needs to know the server names.
So is there a way to get the server names from a label and then run the same job multiple times, passing each server name to the Maven build?
Example
Given that I have the label 'BackEndServers',
obtain a list of node names 'Server1,Server2'
and run my job for each node name, passing it as a parameter,
aka
Having Job (with parameter Server1)
Having Job (with parameter Server2)
Use Jenkins environment variables like NODE_NAME in the Maven command of the build job as the value for a system property. For example:
mvn clean install -Djenkins.node.name=${NODE_NAME}
In your Maven project (pom.xml), configure the plugin that requires the node name using the property ${jenkins.node.name}.
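For example, a sketch that hands the value to tests via the Surefire plugin (the plugin choice and the server.name property are illustrative, not from the original question):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <systemPropertyVariables>
      <!-- filled from -Djenkins.node.name=... on the command line -->
      <server.name>${jenkins.node.name}</server.name>
    </systemPropertyVariables>
  </configuration>
</plugin>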
Here are some links on how to trigger Jenkins builds remotely:
How to trigger Jenkins builds remotely and to pass parameters
Triggering builds remotely in Jenkins
Launching a build with parameters
I don't know if it is possible in exactly the way you want, but the provided information should help you find a solution.
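For instance, a parameterized job can be started remotely via Jenkins' buildWithParameters endpoint; a sketch (job name, parameter name, and credentials are placeholders, and an API token is assumed):

curl -X POST "http://ServerIP:8080/job/MyMavenJob/buildWithParameters?ServerName=Server1" --user user:API_TOKEN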
Try Jenkins.getInstance().getComputer(env.NODE_NAME).getNode(). See the official docs for more.
In the end I created 2 jobs:
Job 1 interrogates the Jenkins nodes for me and builds up a string of the servers to use.
Job 2 uses a Dynamic Axis label with the list from Job 1 to execute my Maven build.
In Job 1 I used the EnvInject plugin, which has an 'Evaluated Groovy Script' section where you can basically do anything, as long as it returns a property map. I don't know how to return a value from a Groovy script otherwise, so this worked well for me, as I can reference the property (or environment variables) from almost anywhere.
import hudson.model.*

// The label associated with the nodes whose server names I want.
String labelIWantServersOf = TheLabelUsedOnTheElasticAxisPlugin;
String serverList = '';

for (aSlave in hudson.model.Hudson.instance.slaves) {
  out.println('Evaluating Server(' + aSlave.name + ') with label = ' + aSlave.getLabelString());
  if (aSlave.getLabelString().indexOf(labelIWantServersOf) > -1) {
    serverList += aSlave.name + ' ';
    out.println('Valid server found: ' + aSlave.name);
  }
}

out.println('Final server list where SOAP projects will run on = ' + serverList + ' which will be used in the envInject map');

Map<String, String> myMap = new HashMap<>(2);
myMap.put("serverNamesToExecuteSoapProjectOn", serverList);
return myMap;
Then I had some issues passing the environment variable on to my next job, so I simply wrote the values I wanted to a property file using a Windows batch script in the build process:
echo serverNamesToExecuteSoapProjectOn=%serverNamesToExecuteSoapProjectOn%> baseEnvMap.properties
Then, as a post-build action, I used 'Trigger parameterized build on other projects', calling my 2nd job and passing baseEnvMap.properties to it.
Then on Job 2, which is a multi-configuration job, I added a Dynamic Axis using the environment variable passed to it via the property file.
This duplicates Job 2 and executes it once for each value the Groovy script built up, which I can reference in my mvn arguments.
To list out all nodes of label name LABELNAME:
http://ServerIP:8080/label/LABELNAME/api/json?pretty=true
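The JSON it returns contains a nodes array; a short sketch of consuming it from Python (assuming the requests library, with ServerIP and the label as placeholders):

import requests

# Hypothetical URL; substitute your Jenkins host and label.
data = requests.get('http://ServerIP:8080/label/BackEndServers/api/json').json()
names = [node['nodeName'] for node in data.get('nodes', [])]
print(','.join(names))  # e.g. Server1,Server2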
I am a beginner, and I have a simple application developed locally that uses MongoDB with MongoKit as follows:
app = Flask(__name__)
app.config.from_object(__name__)
customerDB = MongoKit(app)
customerDB.register([CustomerModel])
Then in my views I just use customerDB.
I have put everything on Heroku, but my database connection doesn't work.
I got the URI I need to connect with via:
heroku config | grep MONGOLAB_URI
but I am not sure how to use it. I looked at the following post, but I am even more confused:
How can I use the mongolab add-on to Heroku from python?
Any help would be appreciated.
Thanks!
According to the documentation, Flask-MongoKit supports a set of configuration settings.
MONGODB_DATABASE
MONGODB_HOST
MONGODB_PORT
MONGODB_USERNAME
MONGODB_PASSWORD
The MONGOLAB_URI environment setting needs to be parsed to get each of these. We can use this answer to the question you linked to as a starting point.
import os
from urlparse import urlsplit

from flask import Flask
from flask_mongokit import MongoKit

app = Flask(__name__)

# Get the URL from the Heroku setting.
url = os.environ.get('MONGOLAB_URI', 'mongodb://localhost:27017/some_db_name')

# Parse it.
parsed = urlsplit(url)

# The database name comes from the path, minus the leading /.
app.config['MONGODB_DATABASE'] = parsed.path[1:]

if '@' in parsed.netloc:
    # If there are authentication details, split the network locality.
    auth, server = parsed.netloc.split('@')
    # The username and password are in the first part, separated by a :.
    app.config['MONGODB_USERNAME'], app.config['MONGODB_PASSWORD'] = auth.split(':')
else:
    # Otherwise the whole thing is the host and port.
    server = parsed.netloc

# Split whatever netloc we have left into host and port (the port must be an int).
host, port = server.split(':')
app.config['MONGODB_HOST'], app.config['MONGODB_PORT'] = host, int(port)

customerDB = MongoKit(app)
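For example (values made up for illustration), MONGOLAB_URI = mongodb://user:secret@ds012345.mongolab.com:35147/heroku_app12345 would yield MONGODB_USERNAME = 'user', MONGODB_PASSWORD = 'secret', MONGODB_HOST = 'ds012345.mongolab.com', MONGODB_PORT = 35147 and MONGODB_DATABASE = 'heroku_app12345'.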
I have nearly identical versions of webapps on different sites.
What I'd like to do is specify the site on the command line...
cucumber --server server1 --tags @tests
....
@servers = {'server1' => 'https://www.tests.com', 'server2' => 'https://www.foobar.com'}
....
Background:
Given I am on {@server1}
Scenario: Happy plan
When I go here
And I see this
Then I get that
What is the best way to run the same script on multiple similar websites? Can it be driven from the command line?
Your best option is to use an environment variable for your server name:
cucumber SERVER=server1 --tags @tests
You can create a generic step:
Given I am on the configured test server
Then, in your step definition, you can look that up as you would in any normal Ruby code and set it as Capybara's base URL:
Given /^I am on the configured test server$/ do
  server_name = ENV['SERVER']
  url = @servers[server_name] or raise "Unknown test server: #{server_name}"
  Capybara.app_host = url
end
Note that when using a remote server, you'll need to use a Capybara driver that supports it, such as Selenium: the default RackTest driver does not. You may also want to set run_server to false. See https://github.com/jnicklas/capybara#calling-remote-servers
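A sketch of those two settings (placed in features/support/env.rb, for instance):

Capybara.default_driver = :selenium  # the default RackTest driver can't reach a remote host
Capybara.run_server = false          # don't boot a local app; we're targeting a live site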
Create some config file and read it before executing the scripts.
Put the code for parsing the config in features/support/env.rb, for example.
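For example, a sketch of features/support/env.rb that reads a hypothetical config/servers.yml mapping names to URLs (both the file and its keys are assumptions, not part of the original answer):

require 'yaml'

# Hypothetical file config/servers.yml, e.g.:
#   server1: https://www.tests.com
#   server2: https://www.foobar.com
SERVERS = YAML.load_file('config/servers.yml')
Capybara.app_host = SERVERS.fetch(ENV['SERVER'] || 'server1')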