I need the application server, which consists of Beanstalk instances, to perform some actions on startup, so I thought of running a bash script passed to the instances via the UserData property that is available for regular EC2 instances.
I've found several example CloudFormation templates that do this with regular EC2 instances, but none with Beanstalk. I've tried to add this to the properties field of the application:
"MyApp" : {
"Type" : "AWS::ElasticBeanstalk::Application",
"Properties" : {
"Description" : "MyApp description",
"ApplicationVersions" : [{
...
}],
"UserData" : {
"Fn::Base64" : { "Fn::Join" : ["", [
"#!/bin/bash\n",
"touch /tmp/userdata_sucess\n"
]]
}},
...
I also tried adding it to the environment part:
"MyAppEnv" : {
"Type" : "AWS::ElasticBeanstalk::Environment",
"Properties" : {
"ApplicationName" : { "Ref" : "MyApp" },
"Description" : "MyApp environment description",
"UserData" : {
"Fn::Base64" : { "Fn::Join" : ["", [
"#!/bin/bash\n",
"touch /tmp/userdata_sucess\n"
]]
}},
"TemplateName" : "MyAppConfiguration",
"VersionLabel" : "First Cloud version"
}
},
In both cases this resulted in failure when trying to create the stack. Does anyone know if it is possible to pass UserData to a Beanstalk instance using CloudFormation? If so, can you provide an example?
If you want to keep all the advantages that Beanstalk offers - like not having to patch the OS, which Amazon does for you - this isn't possible. One option is to create a custom AMI that includes the needed scripts, but then you must manage the OS security patches yourself. Read more here.
You can do this with .ebextensions, see Amazon docs.
An example:
packages:
  yum:
    bison: []
    libpcap-devel: []
    libpcap: "1.4.0"
    golang: "1.13.4"
    git: []
commands:
  20_show_info_pkgs:
    env:
      GOPATH: /usr/local/gocode
      PATH: $PATH:/sbin:/bin:/usr/sbin:/usr/bin:/opt/aws/bin:/usr/local/bin
    ignoreErrors: true
    command: |
      ls -l /usr/local /usr/local/g*
      env
      yum list bison libpcap-devel libpcap golang git
      which git
      which go
      git --version
      go version
      goreplay version
      true
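These directives go in a file with a .config extension inside an .ebextensions directory at the root of your application source bundle (for example .ebextensions/packages.config - the file name itself is arbitrary). Elastic Beanstalk applies them on every instance it provisions for the environment, which covers the run-a-script-at-startup use case that UserData would handle on a plain EC2 instance.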
I'm deploying Docker Swarm with Ansible and I would like to ensure the ingress network has been created. To that end, I configured the following task:
- name: Ensure ingress network exists
  docker_network:
    state: present
    name: ingress
    driver: overlay
    driver_options:
      ingress: true
And I'm getting the following error:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: docker.errors.NotFound: 404 Client Error for http+docker://localhost/v1.41/networks/ingress/disconnect: Not Found ("No such container: ingress-endpoint")
fatal: [swarm-srv-1]: FAILED! => {"changed": false, "msg": "An unexpected docker error occurred: 404 Client Error for http+docker://localhost/v1.41/networks/ingress/disconnect: Not Found (\"No such container: ingress-endpoint\")"}
I've tried to add some arguments like:
scope: swarm
force: yes
But no change... I've also tried to delete the ingress network with Ansible (state: absent), but I always get the same error.
Note that I don't face any issue when deleting and recreating the ingress network manually on the swarm: docker network rm ingress
I don't know how to resolve this issue... Any help would be appreciated. Thanks!
Here is some information that may help...
# docker version
Version: 20.10.6
API version: 1.41
Go version: go1.13.15
Git commit: 370c289
Built: Fri Apr 9 22:47:35 2021
OS/Arch: linux/amd64
# docker inspect ingress
[
    {
        "Name": "ingress",
        "Id": "yb2tkhep8vtaj9q7w3mssc9lx",
        "Created": "2021-05-19T05:53:27.524446929-04:00",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": true,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "ingress-sbox": {
                "Name": "ingress-endpoint",
                "EndpointID": "dfdc0f123d21a196c7a815c7e0a886924d0799ae5f3be2d38b64d527ed4620b1",
                "MacAddress": "02:42:0a:00:00:02",
                "IPv4Address": "10.0.0.2/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4096"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "8f8932d6f99f",
                "IP": "(ip address here)"
            },
            {
                "Name": "28b9ca95dcf0",
                "IP": "(ip address here)"
            },
            {
                "Name": "f7c48c8af2f5",
                "IP": "(ip address here)"
            }
        ]
    }
]
I had the exact same issue when trying to customize the IP range of the ingress network. It looks like the docker_network module does not support modification of swarm-specific networks: there is an open GitHub issue for this.
I went for the ugly workaround of removing the network through a shell command (docker network rm ingress) and adding it again. When adding it with the docker_network module, I found that creation also doesn't seem to work (it fails to set the ingress property of the network). So I ended up doing both the remove and the create operation through shell commands.
Since the removal will trigger a confirmation dialogue:
WARNING! Before removing the routing-mesh network, make sure all the nodes in your swarm run the same docker engine version. Otherwise, removal may not be effective and functionality of newly created ingress networks will be impaired.
Are you sure you want to continue? [y/N]
I used the expect module to confirm the dialogue:
- name: remove default ingress network
  ansible.builtin.expect:
    command: docker network rm ingress
    responses:
      "[y/N]": "y"

- name: create customized ingress network
  shell: "docker network create --ingress --subnet {{ docker_ingress_network }} --driver overlay ingress"
It is not perfect but it works.
There was one last problem I experienced: when running this on an existing swarm I ended up having network issues on the node where I ran it (somehow the docker_gwbridge network on that node could not handle the change). The fix for this was to fully remove the node from the swarm and re-join it, which regenerates the docker_gwbridge.
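For completeness, a rough sketch of that remove/re-join sequence (the node name swarm-srv-1 is taken from the error output above; the join token and manager address are placeholders printed by docker swarm join-token):
# On the affected node: leave the swarm.
docker swarm leave --force
# On a manager node: drop the stale node entry and print a fresh worker join command.
docker node rm swarm-srv-1
docker swarm join-token worker
# Back on the affected node: optionally remove the broken gwbridge so it gets recreated,
# then re-join using the token and address printed above.
docker network rm docker_gwbridge
docker swarm join --token <worker-token> <manager-ip>:2377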
I am using an ECS-optimized instance (ami-05958d7635caa4d04) in the ECS data plane in the ca-central-1 region.
AWS Systems Manager Agent (SSM Agent) is Amazon software that can be installed and configured on an Amazon EC2 instance, an on-premises server, or a virtual machine (VM). SSM Agent makes it possible for Systems Manager to update, manage, and configure these resources.
In my scenario, launching an ECS task on the ECS-optimized instance (ami-05958d7635caa4d04) causes a resource:memory error. More on this error, here. Monitoring ECS -> cluster -> service -> events will not work for me, because CloudFormation rolls back the cluster.
My existing ECS-optimized instance is launched as shown below:
"EC2Instance":{
"Type": "AWS::EC2::Instance",
"Properties":{
"ImageId": "ami-05958d7635caa4d04",
"InstanceType": "t2.micro",
"SubnetId": { "Ref": "SubnetId"},
"KeyName": { "Ref": "KeyName"},
"SecurityGroupIds": [ { "Ref": "EC2InstanceSecurityGroup"} ],
"IamInstanceProfile": { "Ref" : "EC2InstanceProfile"},
"UserData":{
"Fn::Base64": { "Fn::Join": ["", [
"#!/bin/bash\n",
"echo ECS_CLUSTER=", { "Ref": "EcsCluster" }, " >> /etc/ecs/ecs.config\n",
"groupadd -g 1000 jenkins\n",
"useradd -u 1000 -g jenkins jenkins\n",
"mkdir -p /ecs/jenkins_home\n",
"chown -R jenkins:jenkins /ecs/jenkins_home\n"
] ] }
},
"Tags": [ { "Key": "Name", "Value": { "Fn::Join": ["", [ { "Ref": "AWS::StackName"}, "-instance" ] ]} }]
}
}
1) Is installation of the AWS SSM Agent required on the ECS instance (ami-05958d7635caa4d04) to retrieve such CloudWatch events (resource:memory) with an aws.ssm CloudWatch event rule filter? Or does an aws.ec2 CloudWatch event rule filter suffice?
2) If yes, do I need to explicitly install the SSM Agent on the ECS instance (ami-05958d7635caa4d04) through CloudFormation?
You don't need to install the SSM Agent to monitor something such as memory usage of your instance (whether it is a container instance or not). This is the domain of CloudWatch, not SSM.
All you need to install is the unified CloudWatch agent and configure it accordingly. This is where SSM can help, but it is not necessary and you can install the agent manually (or via a script if you want).
If you decide to use SSM then you will need to install it explicitly. It comes preinstalled on some OSes, but not on the Amazon ECS-Optimized AMI - more about this.
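For reference, a minimal sketch of the manual install route, assuming an Amazon Linux 2 based ECS-optimized AMI (the agent configuration below is an illustrative minimum that only collects memory usage; it is not taken from the question):
# Install the unified CloudWatch agent from the Amazon Linux repositories.
sudo yum install -y amazon-cloudwatch-agent
# Write a minimal agent configuration that publishes memory utilisation.
sudo tee /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json > /dev/null <<'EOF'
{
  "metrics": {
    "metrics_collected": {
      "mem": { "measurement": ["mem_used_percent"] }
    }
  }
}
EOF
# Load the configuration and start the agent.
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json -s
The instance profile also needs permission to publish metrics (for example the CloudWatchAgentServerPolicy managed policy).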
I am making a new composer-wallet with composer 0.19.0
All tests passed fine - tests based on composer-wallet-filesystem
I can successfully import business network cards to the new wallet and use them for transactions.
I have only one issue:
$ composer card list
Error: Can't find end of central directory : is this a zip file ? If it is, see http://stuk.github.io/jszip/documentation/howto/read_zip.html
Command failed
I tried to update jszip to the latest version in composer-cli, but I get the same problem.
Here is the environment variable used to configure the connection:
export NODE_CONFIG='{
    "composer": {
        "wallet": {
            "type": "composer-wallet-mongodb",
            "desc": "Uses a local mongodb instance",
            "options": {
                "uri": "mongodb://localhost:27017/yourCollection",
                "collectionName": "myWallet",
                "options": {
                }
            }
        }
    }
}'
Any help is welcome.
My cluster config file is as follows:
{
    "name": "SampleCluster",
    "clusterConfigurationVersion": "1.0.0",
    "apiVersion": "01-2017",
    "nodes": [
        {
            "nodeName": "vm0",
            "iPAddress": "here is my VPS ip",
            "nodeTypeRef": "NodeType0",
            "faultDomain": "fd:/dc1/r0",
            "upgradeDomain": "UD0"
        },
        {
            "nodeName": "vm1",
            "iPAddress": "here is my another VPS ip",
            "nodeTypeRef": "NodeType0",
            "faultDomain": "fd:/dc1/r1",
            "upgradeDomain": "UD1"
        },
        {
            "nodeName": "vm2",
            "iPAddress": "here is my another VPS ip",
            "nodeTypeRef": "NodeType0",
            "faultDomain": "fd:/dc1/r2",
            "upgradeDomain": "UD2"
        }
    ],
    "properties": {
        "reliabilityLevel": "Bronze",
        "diagnosticsStore": {
            "metadata": "Please replace the diagnostics file share with an actual file share accessible from all cluster machines.",
            "dataDeletionAgeInDays": "7",
            "storeType": "FileShare",
            "IsEncrypted": "false",
            "connectionstring": "c:\\ProgramData\\SF\\DiagnosticsStore"
        },
        "nodeTypes": [
            {
                "name": "NodeType0",
                "clientConnectionEndpointPort": "19000",
                "clusterConnectionEndpointPort": "19001",
                "leaseDriverEndpointPort": "19002",
                "serviceConnectionEndpointPort": "19003",
                "httpGatewayEndpointPort": "19080",
                "reverseProxyEndpointPort": "19081",
                "applicationPorts": {
                    "startPort": "20001",
                    "endPort": "20031"
                },
                "isPrimary": true
            }
        ],
        "fabricSettings": [
            {
                "name": "Setup",
                "parameters": [
                    {
                        "name": "FabricDataRoot",
                        "value": "C:\\ProgramData\\SF"
                    },
                    {
                        "name": "FabricLogRoot",
                        "value": "C:\\ProgramData\\SF\\Log"
                    }
                ]
            }
        ]
    }
}
It is almost identical to the standalone Service Fabric demo file for an unsecured cluster, except for my VPS IPs. I enabled the Remote Registry service. I ran
.\TestConfiguration.ps1 -ClusterConfigFilePath .\ClusterConfig.Unsecure.MultiMachine.json
but I got the following error:
Unable to change open service manager handle because 5
Unable to query service configuration because System.InvalidOperationException: Unable to change open service manager handle because 5
   at System.Fabric.FabricDeployer.FabricDeployerServiceController.GetServiceStartupType(String machineName, String serviceName)
Querying remote registry service on machine <IP Address> resulted in exception: Unable to change open service manager handle because 5.
Unable to change open service manager handle because 5
Unable to query service configuration because System.InvalidOperationException: Unable to change open service manager handle because 5
   at System.Fabric.FabricDeployer.FabricDeployerServiceController.GetServiceStartupType(String machineName, String serviceName)
Querying remote registry service on machine <Another IP Address> resulted in exception: Unable to change open service manager handle because 5.
Best Practices Analyzer determined environment has an issue. Please see additional BPA log output in DeploymentTraces
LocalAdminPrivilege : True
IsJsonValid : True
IsCabValid :
RequiredPortsOpen : True
RemoteRegistryAvailable : False
FirewallAvailable :
RpcCheckPassed :
NoConflictingInstallations :
FabricInstallable :
DataDrivesAvailable :
Passed : False
Test Config failed with exception: System.InvalidOperationException: Best Practices Analyzer determined environment has an issue. Please see additional BPA log output in DeploymentTraces folder.
   at System.Management.Automation.MshCommandRuntime.ThrowTerminatingError(ErrorRecord errorRecord)
I don't understand the problem. The VPSs are not locally connected; they all have public IPs. I don't know whether that may be the issue. How do I create a virtual LAN among these VPSs? Can anyone give me some direction about this error? Any help is greatly appreciated.
Edit: I used the term VM instead of VPS.
Finally I got this working. Actually, all the nodes were on a network - I had thought they weren't. I enabled file sharing. From the node where I ran the configuration test, I tried to access the shared files on all the other nodes, and I had to provide the login credentials. And then it worked like a charm.
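(A plausible explanation, given the BPA output above: RemoteRegistryAvailable was False and Win32 error 5 means "access denied", so TestConfiguration.ps1 could not reach the Remote Registry service and the administrative shares on the other nodes until file sharing was enabled and valid credentials were supplied.)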
I have to transfer an Elasticsearch index on a Windows machine to an Ubuntu machine. I decided to take a snapshot of the index and then try to restore it on the other system.
I was successfully able to snapshot the index on the Windows machine.
On the Windows machine, in elasticsearch.yml, I had path.repo: ["F:\\mount\\backups"].
So, under mount I had:
.
└── backups
└── old_backup
├── index
├── indices
│ └── old_index
│ ├── 0
│ ├── 1
│ ├── 2
│ ├── 3
│ ├── 4
│ └── meta-snapshot_to_ubuntu.dat
├── meta-snapshot_to_ubuntu.dat
└── snap-snapshot_to_ubuntu.dat
where snapshot_to_ubuntu is the name of the snapshot I made on Windows.
I placed this snapshot in ~/Documents/mount on the Ubuntu machine and started an instance of ES 2.3.0 with path.repo: ["/home/animesh/Documents/mount/backups"] in elasticsearch.yml.
I ran the following on the command line:
curl -XGET localhost:9200/_snapshot/old_backup/snapshot_to_ubuntu?pretty=1
and get
{
  "error" : {
    "root_cause" : [ {
      "type" : "repository_missing_exception",
      "reason" : "[old_backup] missing"
    } ],
    "type" : "repository_missing_exception",
    "reason" : "[old_backup] missing"
  },
  "status" : 404
}
Where am I going wrong?
UPDATE:
I ran the following curl command:
curl -X POST http://localhost:9200/_snapshot/old_backup/snapshot_to_ubuntu/_restore
and I get:
{
  "error": {
    "root_cause": [
      {
        "type": "repository_missing_exception",
        "reason": "[old_backup] missing"
      }
    ],
    "type": "repository_missing_exception",
    "reason": "[old_backup] missing"
  },
  "status": 404
}
I had a similar issue and I would like to share with you how I figured it out.
I will write all the steps; hope it helps other people as well.
I had to transfer an Elasticsearch index on a GCP server to my local machine. I decided to take a snapshot of the index and then try to restore it on my local machine.
I'm assuming you already have the snapshot/s
The steps are:
Create a directory on your local machine with the snapshot/s you want to restore
Navigate to the elasticsearch.yml file. For example, on my local machine you can find the file here: /usr/local/Cellar/elasticsearch/7.8.1/libexec/config/elasticsearch.yml
Add the repository path path.repo: [PATH_TO_BACKUP_DIR] to the elasticsearch.yml file. For example: path.repo: ["/mount/backups", "/mount/longterm_backups"]
Save, exit, and restart Elasticsearch
After all nodes are restarted, the following command can be used to register the shared file system repository with the name my_fs_backup
curl -X PUT "localhost:9200/_snapshot/my_fs_backup?pretty" -H 'Content-Type: application/json' -d'
{
"type": "fs",
"settings": {
"location": "PATH_TO_BACKUP_DIR", // Example: location" : "/usr/local/etc/elasticsearch/elastic-backup"
"compress": true
}
}'
Check your configuration: curl -X GET "localhost:9200/_snapshot/_all?pretty"
Restore from snapshot:
Get all snapshots: curl -X GET "localhost:9200/_snapshot/my_fs_backup/*?pretty"
You will get a list of the available snapshots.
Pick the snapshot you want (In case you have more than one)
Use this command to restore:
curl -X POST "localhost:9200/_snapshot/BACKUP_NAME/SNAPSHOT_ID/_restore?pretty" -H 'Content-Type: application/json' -d'
{
"indices": "users, events",
"ignore_unavailable": true,
"include_global_state": true
}
For example:
curl -X POST "localhost:9200/_snapshot/my_fs_backup/elastic-snapshot-2020.09.05-lwul1zb9qaorq0k9vmd5rq/_restore?pretty" -H 'Content-Type: application/json' -d'
{
"indices": "users, events",
"ignore_unavailable": true,
"include_global_state": true
}
Note that I restored only two indices: users and events.
Hope it helps 😃
More info and extended tutorials:
Elastic website, jee-appy blogspot
NOTE: This solution uses slightly different repository storage, but the behaviour is expected to be the same!
I know it's a zombie question, but I recently stumbled over this while testing the restore procedure of Elasticsearch snapshots with the Azure Repository plugin.
I created a snapshot on our old PaaS OpenStack and tried restoring it on a fresh Azure Elastic cluster, where I had tested the connectivity of Azure repositories before. The repository definition was still there in my case:
{
  "type": "azure",
  "settings": {
    "container": "restore",
    "chunk_size": "32MB",
    "compress": true
  }
}
But restoring always gave me the missing repository exception:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "repository_missing_exception",
        "reason" : "[restore] missing"
      }
    ],
    "type" : "repository_missing_exception",
    "reason" : "[restore] missing"
  },
  "status" : 404
}
Turns out another branch got deployed on my test Azure k8s cluster, which removed the Azure repository plugin and with it the connectivity to the repository. Even restoring the plugin did not help fix the missing_repository_exception.
Carefully re-reading the docs (https://www.elastic.co/guide/en/elasticsearch/reference/7.9/snapshots-register-repository.html) gave me this:
You can unregister a repository using the delete snapshot repository API.
When a repository is unregistered, Elasticsearch only removes the reference to the location where the repository is storing the snapshots. The snapshots themselves are left untouched and in place.
So what solved the missing_repository_exception in my case was the somewhat scary:
DELETE /_snapshot/restore
and then recreating the snapshot location with:
PUT https://localhost:9200/_snapshot/restore --data '
{
  "type": "azure",
  "settings": {
    "container": "restore",
    "chunk_size": "32MB",
    "compress": true
  }
}'
Then the previously failing snapshot restore command succeeded:
POST https://localhost:9200/_snapshot/restore/snapshot_2020810/_restore
{"accepted":true}
curl -XGET localhost:9200/_snapshot/old_backup/snapshot_to_ubuntu?pretty=1
That command only queries a snapshot (with PUT it would create one). Because you didn't register the repository on the Ubuntu side, you get the repository_missing_exception error.
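Registering the repository on the Ubuntu machine first should clear that error. A minimal sketch, assuming ES 2.3 is listening on localhost:9200 and the snapshot files from the question ended up under /home/animesh/Documents/mount/backups/old_backup:
curl -XPUT 'localhost:9200/_snapshot/old_backup' -d '{
  "type": "fs",
  "settings": {
    "location": "/home/animesh/Documents/mount/backups/old_backup",
    "compress": true
  }
}'
Once the repository is registered, the restore call below should no longer return repository_missing_exception.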
What you want is to restore, so you should use the _restore endpoint:
POST /_snapshot/old_backup/snapshot_to_ubuntu/_restore
Check: https://www.elastic.co/guide/en/elasticsearch/reference/2.3/modules-snapshots.html#_restore