I'm trying to create spot instances in a different region using boto3.
My default region is defined as us-east-1.
When I try to create the spot instances in a region different from the default one, an exception is thrown with this message:
botocore.exceptions.ClientError: An error occurred (InvalidParameterValue) when calling the RequestSpotInstances operation: Invalid availability zone: [eu-west-2b]
The instances are created using this code snippet:
for idx in range(len(regions)):
    client.request_spot_instances(
        DryRun=False,
        SpotPrice=price_bids,
        InstanceCount=number_of_instances,
        LaunchSpecification=
        {
            'ImageId': ami_id,
            'KeyName': 'matrix',
            'SecurityGroupIds': ['sg-5f07f52c'],
            'SecurityGroups': ['MatrixSG'],
            'InstanceType': machine_type,
            'Placement':
            {
                'AvailabilityZone': regions[idx],
            },
        },
    )
I assume that your regions list actually contains a list of Availability Zones rather than regions (since it says 'AvailabilityZone': regions[idx]).
Each AWS region operates independently. When making a connection to an AWS service, you must connect to the specific service in a specific region.
For example:
client = boto3.client('ec2')
This connects to EC2 in your default region.
Alternatively, you can specify a region:
client = boto3.client('ec2', region_name = 'eu-west-2')
You are receiving the Invalid availability zone error because your client is connected to a region (eg us-east-1) but is referencing an Availability Zone that is in a different region (eg eu-west-2b).
Your code is only creating the client once, but is attempting to connect to multiple regions. The solution is to create a new client connection for the region you wish to use. If your loop connects to multiple regions, then the client should be defined within the loop rather than outside the loop.
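For example, a minimal sketch of the loop (assuming each entry in regions is actually an Availability Zone name such as eu-west-2b, and that price_bids, number_of_instances, ami_id and machine_type are defined as in your snippet):

import boto3

for az in regions:
    region = az[:-1]  # strip the trailing zone letter, e.g. 'eu-west-2b' -> 'eu-west-2'
    client = boto3.client('ec2', region_name=region)  # new client for each region
    client.request_spot_instances(
        DryRun=False,
        SpotPrice=price_bids,
        InstanceCount=number_of_instances,
        LaunchSpecification={
            'ImageId': ami_id,
            'InstanceType': machine_type,
            'Placement': {'AvailabilityZone': az},
        },
    )

Keep in mind that AMI IDs and security group IDs are also region-specific, so those values would need to be looked up per region as well.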
I have three stacks created in AWS CDK:
shared-stack.ts - Defines the shared VPC stack
routing-stack.ts - Defines Route53 configuration
ec2-stack.ts - Defines EC2 Instance
For Route53, I am trying to create an ARecord that points to the EC2 instance that was deployed in another stack. How can I make this happen? The IP address keeps changing as I redeploy, and assigning a static IP address doesn't look like a good solution.
routing-stack.ts
// Create route 53 zone
const vegafolioZone = new route53.PublicHostedZone(this, "MyZone", {
  zoneName: "mywebsite.com",
});

// A -> Point to EC2 instance IP address
new route53.ARecord(this, "ARecord", {
  zone: vegafolioZone,
  recordName: "app.mywebsite.com",
  target: //WHAT TO ADD HERE?
});
I'm using WAS ND and want to have a dmgr profile with a federated managed profile for the app.
I am creating the cluster using:
AdminTask.createCluster('[-clusterConfig [-clusterName %s -preferLocal true]]' % nameOfModulesCluster)
Next, I configure my WAS instance: queues, datasources, JDBC, JMS activation specs, factories, etc.
Just before creating the cluster member, I display:
print("QUEUES: \n" + AdminTask.listSIBJMSQueues(AdminConfig.getid('/ServerCluster:ModulesCluster/')))
print("JMS AS: \n" + AdminTask.listSIBJMSActivationSpecs(AdminConfig.getid('/ServerCluster:ModulesCluster/')))
This returns all the queues I created earlier. But when I call
AdminTask.createClusterMember('[-clusterName %(cluster)s -memberConfig [-memberNode %(node)s -memberName %(server)s -memberWeight 2 -genUniquePorts true -replicatorEntry false] -firstMember [-templateName default -nodeGroup DefaultNodeGroup -coreGroup DefaultCoreGroup -resourcesScope cluster]]' % {'cluster': nameOfCluster, 'node': nameOfNode, 'server': nameOfServer})
AdminConfig.save()
the configuration displayed earlier is... gone. Some configuration (like the datasources) is still displayed in the ibm/console, but the queues and JMS activation specs are not. The same print statements display nothing, but the member is added to the cluster.
I can't find any information using Google. I've tried AdminNodeManagement.syncActiveNodes(), but it won't work since I'm using
/opt/IBM/WebSphere/AppServer/bin/wsadmin.sh -lang jython -conntype NONE -f global.py
and AdminControl is not available.
What should I do in order to keep my configuration created before clustering? Do I have to sync it somehow?
This is the default behavior and is due to the -resourcesScope attribute in the createClusterMember command. This attribute determines how the server resources are promoted in the cluster when adding the first cluster member.
Valid options for -resourcesScope are:
Cluster: moves the resources of the first cluster member to the cluster level. The resources of the first cluster member replace the resources of the cluster. (This is the default option.)
Server: maintains the server resources at the new cluster member level. The cluster resources remain unchanged.
Both: copies the resources of the cluster member (server) to the cluster level. The resources of the first cluster member replace the resources of the cluster. The same resources exist at both the cluster and cluster member scopes.
Since you have set "-resourcesScope cluster" in your createClusterMember command, all the configuration created at cluster scope is being deleted/replaced by the empty configuration of the new cluster member.
So, for your configuration to work, set "-resourcesScope server", so that the cluster configuration is not replaced by the cluster member's configuration.
AdminTask.createClusterMember('[-clusterName %(cluster)s -memberConfig [-memberNode %(node)s -memberName %(server)s -memberWeight 2 -genUniquePorts true -replicatorEntry false] -firstMember [-templateName default -nodeGroup DefaultNodeGroup -coreGroup DefaultCoreGroup -resourcesScope server]]' % {'cluster': nameOfCluster, 'node': nameOfNode, 'server': nameOfServer})
AdminConfig.save()
Refer "Select how the server resources are promoted in the cluster" section in https://www.ibm.com/support/knowledgecenter/en/SSAW57_8.5.5/com.ibm.websphere.nd.doc/ae/urun_rwlm_cluster_create2_v61.html for more details.
I created a private image of a Google compute engine persistent disk, called primecoin01.
Later on, I tried to create a new image. It fails, saying the regexp is invalid, both when listing the images and during gcloud compute instances delete, the first step in using my persistent disk to create an image. It let me create the image name, and now I'm unable to use the commands gcloud compute images list or gcloud compute instances delete instance-0 --keep-disks boot. I do not know of a way to delete this image from my list.
primecoin01 certainly meets the regular expression criteria, and I have no clue why the image apparently ended up named ``primecoin01. All help greatly appreciated.
Details below:
C:\Program Files\Google\Cloud SDK>gcloud compute images list
NAME PROJECT ALIAS DEPRECATED STATUS
centos-6-v20141021 centos-cloud centos-6 READY
centos-7-v20141021 centos-cloud centos-7 READY
coreos-alpha-494-0-0-v20141108 coreos-cloud READY
coreos-beta-444-5-0-v20141016 coreos-cloud READY
coreos-stable-444-5-0-v20141016 coreos-cloud coreos READY
backports-debian-7-wheezy-v20141021 debian-cloud debian-7-backports READY
debian-7-wheezy-v20141021 debian-cloud debian-7 READY
container-vm-v20141016 google-containers container-vm READY
opensuse-13-1-v20141102 opensuse-cloud opensuse-13 READY
rhel-6-v20141021 rhel-cloud rhel-6 READY
rhel-7-v20141021 rhel-cloud rhel-7 READY
sles-11-sp3-v20140930 suse-cloud sles-11 READY
sles-11-sp3-v20141105 suse-cloud sles-11 READY
sles-12-v20141023 suse-cloud READY
ERROR: (gcloud.compute.images.list) Some requests did not succeed:
- Invalid value '``primecoin01'. Values must match the following regular expression: '(?:(?:[-a-z0-9]{1,63}\.)*(?:[a-z]
(?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))'
and
C:\Program Files\Google\Cloud SDK>gcloud compute instances delete instance-0 --keep-disks boot
ERROR: (gcloud.compute.instances.delete) Unable to fetch a list of zones. Specifying [--zone] may fix this issue:
- Invalid value '``primecoin01'. Values must match the following regular expression: '(?:(?:[-a-z0-9]{1,63}\.)*(?:[a-z] (?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))'
and
C:\Program Files\Google\Cloud SDK>gcloud compute instances delete instance-0 --keep-disks boot --zone us-central1-b
The following instances will be deleted. Attached disks configured to
be auto-deleted will be deleted unless they are attached to any other
instances. Deleting a disk is irreversible and any data on the disk
will be lost.
- [instance-0] in [us-central1-b]
Do you want to continue (Y/n)? y
ERROR: (gcloud.compute.instances.delete) Failed to fetch some instances:
- Invalid value '``primecoin01'. Values must match the following regular expression: '(?:(?:[-a-z0-9]{1,63}\.)*(?:[a-z]
(?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))'
It seems to be a validation error from when the disk was created, and the name is not correct. Are you still having the same issue?
One way to create a 'snapshot' of your disk would be to use the Linux dd command to dump your disk to a raw file and then tar that file to create an image from it.
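For example, a rough sketch driven from Python (the device path /dev/sdb and the file names here are hypothetical; adjust them for your disk, and ideally run this against a disk that is not mounted read-write so the copy is consistent):

import subprocess

# Dump the raw contents of the persistent disk to a file (hypothetical device path).
subprocess.run(['dd', 'if=/dev/sdb', 'of=disk.raw', 'bs=4M', 'conv=sparse'], check=True)

# Package the raw file as a gzipped tar, which can later be used to build an image.
subprocess.run(['tar', '-Sczf', 'primecoin-backup.tar.gz', 'disk.raw'], check=True)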
Let's assume I have several elasticsearch machines in a cluster: 192.168.1.1, 192.168.1.2 and 192.168.1.3
Any of the machines can go down. It doesn't look like NEST supports providing a range of IPs to try to connect to.
So how do I make sure I connect to one of the available machines from NEST? Just try to open a connection to one and, if TryConnect doesn't work, try another?
You can run a local ES instance on your application server (eg your web server) and configure it to work as a load balancer:
Set node.client: true (or node.master: false and node.data: false) in this local ES config to make it a load balancer. This means the node will not become master and will not hold data.
Configure it to join the cluster (your 3 nodes don't need to know about this ES node).
Configure NEST to use the local ES node as your search server.
This ES node then becomes part of your cluster and will distribute your requests to the suitable nodes.
If you don't want a "load balancer", then you have to check manually on the client side to determine which node is alive.
Since you have a small set of nodes, you can use a StaticConnectionPool:
var uri1 = new Uri("http://192.168.1.1:9200"); // Uri needs a scheme (and typically the ES port)
var uri2 = new Uri("http://192.168.1.2:9200");
var uri3 = new Uri("http://192.168.1.3:9200");
var uris = new List<Uri> { uri1, uri2, uri3 };
var connectionPool = new StaticConnectionPool(uris);
var connectionSettings = new ConnectionSettings(connectionPool); // <-- needs to be reused
var client = new ElasticClient(connectionSettings);
An important point to keep in mind is to reuse the same ConnectionSettings instance when creating a new elastic client, since the client's caches are per ConnectionSettings. See this GitHub post:
...In any case its important to share the same ConnectionSettings
instance across any elastic client you instantiate. ElasticClient can
be a singleton or not as long as each instance shares the same
ConnectionSettings instance.
All of our caches are per ConnectionSettings, this includes
serialization caches.
Also a single ConnectionSettings holds a single IConnectionPool and
IConnection something you definitely want to reuse across requests.
I would set up one of the nodes as a load balancer, meaning that the URL you are calling should always be up.
Though if you increase the number of replicas, you can call any of the nodes by URL and still access the same data. Elasticsearch does not care which one you access while in a cluster, so you could build your own range of IPs in your application.
I am trying to write a balancer tool for HBase which could balance regions across RegionServers for a table by region count and/or region size (sum of store file sizes). I could not find any HBase API class which returns the region sizes or related info. I have already checked a few of the classes which can be used to get other table/region info, e.g. org.apache.hadoop.hbase.client.HTable and HBaseAdmin.
I am thinking another way this could be implemented is by using one of the Hadoop classes which return the sizes of directories in the filesystem; for example, org.apache.hadoop.fs.FileSystem lists the files under a particular HDFS path.
Any suggestions?
I use this to do managed splits of regions, but you could leverage it to load-balance on your own. I also load-balance myself to spread the regions (of a given table) evenly across our nodes so that MR jobs are evenly distributed.
Perhaps the code-snippet below is useful?
// Assumes an existing org.apache.hadoop.conf.Configuration named conf and the older
// HBase client API that exposes HServerLoad (replaced by ServerLoad in later releases).
import java.util.Map;

import org.apache.hadoop.hbase.ClusterStatus;
import org.apache.hadoop.hbase.HServerLoad;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

final HBaseAdmin admin = new HBaseAdmin(conf);
final ClusterStatus clusterStatus = admin.getClusterStatus();
for (ServerName serverName : clusterStatus.getServers()) {
    final HServerLoad serverLoad = clusterStatus.getLoad(serverName);
    for (Map.Entry<byte[], HServerLoad.RegionLoad> entry : serverLoad.getRegionsLoad().entrySet()) {
        final String region = Bytes.toString(entry.getKey());
        final HServerLoad.RegionLoad regionLoad = entry.getValue();
        long storeFileSize = regionLoad.getStorefileSizeMB(); // per-region store file size in MB
        // other useful metrics are available from regionLoad if you like
    }
}
What's wrong with the default Load Balancer?
From the Wiki:
The balancer is a periodic operation which is run on the master to redistribute regions on the cluster. It is configured via hbase.balancer.period and defaults to 300000 (5 minutes).
If you really want to do it yourself, you could indeed use the Hadoop API and, more specifically, the FileStatus class. This class acts as an interface to represent client-side information for a file.