I am looking for a way to increment an integer value each time a deploy occurs. I need this variable to persist between deploys so that the script can increment it again on the next deploy.
I need to ensure that the port numbers for my services differ between deploys, i.e. two services cannot both listen on port 1122, and I need to deploy two services. My current solution is something like this:
// #{DeployCounter} and #{DeployPort} are substituted by Octopus before the script runs
int deployCounter = #{DeployCounter};
int port = #{DeployPort} + deployCounter;
// Write the new port back as an Octopus output variable
Octopus.SetVariable("DeployPort", port.ToString());
Thanks in advance!
When a Kubernetes Spring-Boot app is launched with 8 instances, the app running in each node needs to fetch the sequence number of its pod/container. There should be no repeating numbers among the pods/containers running the same app. Assume that a pod runs a single container, and a container runs only one instance of the app.
There are a few unique identifiers the app can pull from Kubernetes API for each pod such as:
MAC address (networkInterface.getHardwareAddress())
Hostname
nodeName (aks-default-12345677-3)
targetRef.name (my-sample-service-sandbox-54k47696e9-abcde)
targetRef.uid (aa7k6278-abcd-11ef-e531-kdk8jjkkllmm)
IP address (12.34.56.78)
But the app getting this information from the API cannot safely generate and assign a unique number to itself within the specified range of pods [0, Max Node Count - 1]. Any reducer step (such as a bitwise &) running over these unique identifiers will eventually repeat numbers. And communicating with the other pods is an anti-pattern, although there are approaches that use consensus/agreement patterns to accomplish this.
My question is:
Is there a simple way for Kubernetes to assign a sequential number to each node/container/pod when it is created, possibly as an environment variable in the pod? The numbers can begin with 0 or 1 and should go up to the maximum pod count.
Background info and some research:
Executing UUID.randomUUID().hashCode() & 7 eight times will give you repeated numbers between 0 and 7 (see the referenced article, which makes this mistake in createNodeId()).
Sample outputs from actual runs of the reducer step above:
{0=2, 1=1, 2=0, 3=3, 4=0, 5=1, 6=1, 7=0}
{0=1, 1=0, 2=0, 3=1, 4=3, 5=0, 6=2, 7=1}
{0=1, 1=0, 2=2, 3=1, 4=1, 5=2, 6=0, 7=1}
I went ahead and executed 100 million runs of the above code and found that only 0.24% of the cases had an even distribution.
Uneven Reducers: 99760174 | Even Reducers: 239826
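For reference, here is a minimal sketch of that experiment (class and variable names are illustrative, not from the original code):

import java.util.Map;
import java.util.TreeMap;
import java.util.UUID;

public class ReducerCollisionDemo {
    public static void main(String[] args) {
        int pods = 8;
        Map<Integer, Integer> counts = new TreeMap<>();
        for (int k = 0; k < pods; k++) {
            counts.put(k, 0); // start every bucket at zero, as in the outputs above
        }
        for (int i = 0; i < pods; i++) {
            int reduced = UUID.randomUUID().hashCode() & 7; // reducer step: value in [0, 7]
            counts.merge(reduced, 1, Integer::sum);
        }
        // An even distribution would be {0=1, 1=1, ..., 7=1}; any count > 1 is a collision.
        System.out.println(counts);
    }
}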
Q: "app is launched with 8 instances, the app running in each node needs to fetch sequence number of the pod"
It sounds like you are requesting a stable Pod identity. If you deploy your Spring Boot app as a StatefulSet instead of as a Deployment, then this identity is a "provided feature" from Kubernetes.
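Each pod in a StatefulSet gets a stable name and hostname that ends in its ordinal index (my-app-0, my-app-1, ...), so the app can derive its sequence number locally. A minimal sketch, assuming the default StatefulSet hostname convention:

import java.net.InetAddress;

public final class PodOrdinal {

    // Extracts the ordinal from a StatefulSet hostname, e.g. "my-app-3" -> 3.
    static int fromHostname(String hostname) {
        return Integer.parseInt(hostname.substring(hostname.lastIndexOf('-') + 1));
    }

    public static void main(String[] args) throws Exception {
        String hostname = InetAddress.getLocalHost().getHostName();
        System.out.println("Pod ordinal: " + fromHostname(hostname));
    }
}

Alternatively, the pod name can be exposed as an environment variable via the Downward API and parsed the same way.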
I have a bare metal server, and I want to install multiple services on this server.
My inventory looks like this:
[Mygroup]
Server port_service=9990 service_name="service1"
Server port_service=9991 service_name="service2"
When I launch my Ansible job, only service2 is installed because I have the same server on each line of my group. Is there a way to force Ansible to take all lines of a group?
I don't want to create a group for each service
Q: "There is a way to force Ansible to take all lines of a group?"
A: No. There is not. In a group, the hosts shall be unique. If there are multiple hosts with the same name the last one will be taken.
Put the variables into one line e.g.
[Mygroup]
Server port_services="[9990, 9991]" service_names="['service1', 'service2']"
(and change the code to loop over these lists).
See How to build your inventory. There are many other options, e.g.
[Mygroup]
Server
[Mygroup:vars]
port_services="[9990, 9991]"
service_names="['service1', 'service2']"
I hope I got you right, but this should do the trick.
https://docs.ansible.com/ansible/latest/user_guide/intro_patterns.html
Greets
Harry
Another solution is to use an alias.
This works fine for me:
[Mygroup]
service_1 ansible_host=Server port_service=9990 service_name="service1"
service_2 ansible_host=Server port_service=9991 service_name="service2"
I have a server with multiple public IP addresses.
I want to send campaign emails on this server.
Sometimes I would like to send mail from a particular IP (a filter on the sender email address determines which IP to use).
The only solution I have found is to install multiple Postfix instances (one per outgoing IP). Is there a better way to do this?
I have a second question: Postfix gives a unique queue ID to each message. If I have several Postfix instances, can those unique IDs collide between two instances?
Thanks
sender_dependent_default_transport_maps is your friend. First, add this to main.cf:
sender_dependent_default_transport_maps = hash:/etc/postfix/sender-transport
Next, create the file /etc/postfix/sender-transport with
@my-sender-domain.com smtp-192-168-0-1:
Any message received with a sender at @my-sender-domain.com will use the service smtp-192-168-0-1 (the name can be anything) for sending. Don't forget to run postmap /etc/postfix/sender-transport on the file.
And then, add the service to master.cf
smtp-192-168-0-1 unix - - n - - smtp
-o smtp_bind_address=192.168.0.1
Again, the service name can be anything, but it must match the one in the hash file. This smtp service will send the message from the IP 192.168.0.1; change as needed.
Add as many services and lines to the hash file as you want. Don't forget to run service postfix restart after that.
There are many other options you can add to the smtp service, like -o smtp_helo_name=my.public.hostname.com, etc.
I just finished setting up Postfix like this :-)
In a web page, I start an FTSearch using an agent called via Ajax.
Is there a way to programmatically stop this agent (for example, using a button)?
Not really. You could programmatically restart the HTTP task...
There's a server setting that defines a timeout for web agents, which should be set.
You could add a check to your code, testing an environment variable for its value... For example, use this (in pseudo-code):
count = count + 1
if count modulo 100 = 0 then   // only check once per 100 docs
    if getEnvironmentVar("STOP_MY_AGENT") = 1 then
        exit
    fi
fi
And then you need some button to set the environment variable. Make sure that in both cases you use the same environment, i.e. the same notes.ini file!
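If the agent is a Java agent, the same check could look roughly like this (a sketch only; the processing-loop methods and the STOP_MY_AGENT variable name are placeholders):

import lotus.domino.AgentBase;
import lotus.domino.Session;

public class SearchAgent extends AgentBase {
    public void NotesMain() {
        try {
            Session session = getSession();
            int count = 0;
            while (hasMoreWork()) {                // placeholder for your FTSearch processing loop
                count++;
                if (count % 100 == 0) {            // only check once per 100 docs
                    String stop = session.getEnvironmentString("STOP_MY_AGENT");
                    if ("1".equals(stop)) {
                        return;                    // abort the agent
                    }
                }
                processNextDocument();             // placeholder
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private boolean hasMoreWork() { return false; }    // placeholder
    private void processNextDocument() { }             // placeholder
}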
We are running a Spring 3.0.x web application (.war) with a nightly @Scheduled job in a clustered WebLogic 10.3.4 environment. However, as the application is deployed to each node (using the deployment wizard in the AdminServer's web console), the job is started on each node every night, thus running multiple times concurrently.
How can we prevent this from happening?
I know that libraries like Quartz allow coordinating jobs inside a clustered environment by means of a database lock table, and I could even implement something like this myself. But since this seems to be a fairly common scenario, I wonder whether Spring already comes with an option to easily circumvent this problem without adding new libraries to my project or putting in manual workarounds.
We are not able to upgrade to Spring 3.1 with configuration profiles, as mentioned here
Please let me know if there are any open questions. I also asked this question on the Spring Community forums. Thanks a lot for your help.
We only have one task that sends a daily summary email. To avoid extra dependencies, we simply check whether the hostname of each node corresponds to a configured system property.
import java.net.InetAddress;
import java.net.UnknownHostException;

private boolean isTriggerNode() {
    try {
        String triggerHostname = System.getProperty("trigger.hostname");
        String hostName = InetAddress.getLocalHost().getHostName();
        return hostName.equals(triggerHostname);
    } catch (UnknownHostException e) {
        return false;
    }
}

public void execute() {
    if (isTriggerNode()) {
        // send email
    }
}
We are implementing our own synchronization logic using a shared lock table inside the application database. This allows all cluster nodes to check whether a job is already running before actually starting it.
Be careful: with the shared-lock-table approach you always have the concurrency issue of two cluster nodes reading/writing the table at the same time.
It is best to perform the following steps in one DB transaction (see the sketch after this list):
- read the value in the shared lock table
- if no other node holds the lock, take the lock
- update the table to indicate that you have taken the lock
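A minimal JDBC sketch of those three steps, assuming a hypothetical SCHEDULER_LOCK table with columns JOB_NAME and LOCKED_BY (table, column, and class names are illustrative):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class JobLock {

    // Tries to take the lock for the given job inside a single transaction.
    // The caller is expected to clear LOCKED_BY again once the job has finished.
    public boolean tryAcquire(Connection con, String jobName, String nodeName) throws SQLException {
        con.setAutoCommit(false);
        try {
            // Step 1: read the row and lock it so no other node can read-modify-write concurrently.
            PreparedStatement select = con.prepareStatement(
                    "SELECT LOCKED_BY FROM SCHEDULER_LOCK WHERE JOB_NAME = ? FOR UPDATE");
            select.setString(1, jobName);
            ResultSet rs = select.executeQuery();
            // Step 2: if another node already holds the lock (or the row is missing), give up.
            if (!rs.next() || rs.getString(1) != null) {
                con.rollback();
                return false;
            }
            // Step 3: mark the lock as taken by this node.
            PreparedStatement update = con.prepareStatement(
                    "UPDATE SCHEDULER_LOCK SET LOCKED_BY = ? WHERE JOB_NAME = ?");
            update.setString(1, nodeName);
            update.setString(2, jobName);
            update.executeUpdate();
            con.commit();
            return true;
        } catch (SQLException e) {
            con.rollback();
            throw e;
        }
    }
}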
I solved this problem by making one of the boxes the master.
Basically, set an environment variable on one of the boxes, like master=true,
and read it in your Java code through System.getenv("master").
If it is present and true, run your code.
Basic snippet:
@Scheduled(fixedDelay = 60000) // placeholder trigger; use your real schedule here
void process() {
    boolean master = Boolean.parseBoolean(System.getenv("master"));
    if (master) {
        // your logic
    }
}
You can try using the TimerManager (Job Scheduler in a clustered environment) from WebLogic as the TaskScheduler implementation (TimerManagerTaskScheduler). It should work in a clustered environment.
Andrea
I've recently implemented a simple annotation library, dlock, to execute a scheduled task only once over multiple nodes. You can simply do something like the example below.
#Scheduled(cron = "59 59 8 * * *" /* Every day at 8:59:59am */)
#TryLock(name = "emailLock", owner = NODE_NAME, lockFor = TEN_MINUTE)
public void sendEmails() {
List<Email> emails = emailDAO.getEmails();
emails.forEach(email -> sendEmail(email));
}
See my blog post about using it.
You don't need to synchronize your job start using a DB.
In a WebLogic application you can get the name of the server instance the application is running on:
String serverName = System.getProperty("weblogic.Name");
Simply put a condition around the job execution:
if (serverName.equals(".....")) {
    // execute my job
}
If you want to bounce the job from one machine to the other, you can take the current day of the year: if it is odd, execute the job on one machine; if it is even, execute it on the other.
This way you load a different machine every day.
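A quick sketch of that alternation, assuming exactly two managed servers (the server names below are placeholders):

import java.util.Calendar;

public class AlternatingTrigger {

    // Odd days of the year run on one server, even days on the other.
    static boolean shouldRunToday(String thisServerName) {
        int dayOfYear = Calendar.getInstance().get(Calendar.DAY_OF_YEAR);
        String serverForToday = (dayOfYear % 2 == 1) ? "server-a" : "server-b"; // placeholder names
        return serverForToday.equals(thisServerName);
    }

    public static void main(String[] args) {
        if (shouldRunToday(System.getProperty("weblogic.Name"))) {
            // execute my job
        }
    }
}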
We can make the other machines in the cluster not run the batch job by using the following cron string. It will not run until 2099.
0 0 0 1 1 ? 2099