AWS::ECS::Cluster object does not support attribute CapacityProviders - troposphere

Troposphere module version: 2.6.2.
Python script:
import troposphere.ecs as ecs
...
template.add_resource(ecs.Cluster(
    "Cluster",
    CapacityProviders=["FARGATE", "FARGATE_SPOT"]
))
...
Why am I getting this error when running the script that creates the template?
AWS::ECS::Cluster object does not support attribute CapacityProviders

I believe CapacityProviders is not supported in 2.6.2. Updating to 2.6.3 should solve your problem.
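After upgrading (pip install --upgrade troposphere), a minimal sketch of a complete script, assuming the standard Template API:

from troposphere import Template
import troposphere.ecs as ecs

template = Template()
template.add_resource(ecs.Cluster(
    "Cluster",
    CapacityProviders=["FARGATE", "FARGATE_SPOT"]
))

# Render the generated CloudFormation template as JSON
print(template.to_json())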

Related

Issue with OpenAI API key while using it in Windows

I have to fine-tune the OpenAI model on my custom dataset. I have created the dataset in jsonl format. I use the following commands on the Windows command line:
set OPENAI_API_KEY=<API key>
openai tools fine_tunes.prepare_data -f "train_data.jsonl"
The above commands run successfully and give me some suggestions for updating the jsonl file. After this, I run the following command to fine-tune the 'curie' model.
openai api fine_tunes.create 'openai.api_key = <API key>' -t "train_data.jsonl" -m "curie"
But I am getting following issue:
Error: Incorrect API key provided: "sk-iQJX*****************************************mux". You can find your API key at https://beta.openai.com. (HTTP status code: 401)
Can anybody help me out with this issue?
This is a common issue with earlier versions of the OpenAI CLI. If you haven't already, make sure you upgrade to the most recent version by running
pip install --upgrade openai
One possible workaround is to use a Python script to do what you would normally do in the CLI.
# Train the model by shelling out to the CLI; the child process inherits the key
import os

os.environ["OPENAI_API_KEY"] = "sk-iQJX*****************************************mux"
os.system("openai api fine_tunes.create -t train_data.jsonl -m curie")
When assigning the API key on the command line, don't wrap it in double quotes; assign it bare:
set OPENAI_API_KEY=ab-123123123123123231
This will solve the issue.

AWS Lambda export API Gateway backup to S3

I am trying to configure a Lambda function which will export an API backup to S3. But when I try to get an ordinary Swagger backup through Lambda using this script:
import boto3

client = boto3.client('apigateway')

def lambda_handler(event, context):
    response = client.get_export(
        restApiId='xtmeuujbycids',
        stageName='test',
        exportType='swagger',
        parameters={
            extensions: 'authorizers'
        },
        accepts='application/json'
    )
I am getting this error:
[ERROR] NameError: name 'extensions' is not defined
Please help me resolve this issue.
Could you please check if the documentation has been explicitly published, and if it has been deployed to a stage? It has to be deployed before it is available in the export.
The problem is in:
parameters={
extensions: 'authorizers'
}
You're passing a dictionary, which is fine, but the key should be a string. Since you don't have quotes around extensions, Python tries to resolve it as a variable named extensions, which doesn't exist in your code, and so it raises the NameError.
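A corrected sketch of the handler with the key quoted. The put_object step and the bucket/key names are assumptions added for illustration, since the goal is to land the backup in S3:

import boto3

apigw = boto3.client('apigateway')
s3 = boto3.client('s3')

def lambda_handler(event, context):
    response = apigw.get_export(
        restApiId='xtmeuujbycids',
        stageName='test',
        exportType='swagger',
        parameters={
            'extensions': 'authorizers'  # quoted, so it is a string key
        },
        accepts='application/json'
    )
    # Hypothetical bucket and key; get_export returns the document as a stream in 'body'
    s3.put_object(
        Bucket='my-api-backup-bucket',
        Key='backups/test-swagger.json',
        Body=response['body'].read()
    )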

No environment configuration found. DefaultAzureCredential()

I am trying to use this Python sample to authenticate a client with an Azure service:
# pip install azure-identity
from azure.identity import DefaultAzureCredential
# pip install azure-mgmt-compute
from azure.mgmt.compute import ComputeManagementClient
# pip install azure-mgmt-network
from azure.mgmt.network import NetworkManagementClient
# pip install azure-mgmt-resource
from azure.mgmt.resource import ResourceManagementClient
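# creds_obj is assumed to be defined elsewhere in this script (e.g., loaded from a config file)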
SUBSCRIPTION_ID = creds_obj['SUBSCRIPTION_ID']
# Create client
# For other authentication approaches, please see: https://pypi.org/project/azure-identity/
resource_client = ResourceManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id=SUBSCRIPTION_ID
)
network_client = NetworkManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id=SUBSCRIPTION_ID
)
compute_client = ComputeManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id=SUBSCRIPTION_ID
)
I keep getting No environment configuration found.
The code sample is directly from the Microsoft GitHub repo: https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/resources/azure-mgmt-resource/azure/mgmt/resource/resources/_resource_management_client.py. Ideally, I would like to manage this configuration using environment variables or a config file. Is there any way to do this?
When using the Azure Identity client library for Python, DefaultAzureCredential attempts to authenticate via the following mechanisms in this order, stopping when one succeeds: environment variables (EnvironmentCredential), a managed identity, the shared token cache, Visual Studio Code, the Azure CLI, and finally an interactive browser prompt.
You could set the environment variables that EnvironmentCredential reads to fix it:
from azure.identity import DefaultAzureCredential
credential = DefaultAzureCredential()
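For a service principal with a secret, EnvironmentCredential looks for these variables; a sketch with placeholder values, set before the credential is created:

import os

# Placeholder values; EnvironmentCredential reads these at construction time
os.environ["AZURE_TENANT_ID"] = "<tenant-id>"
os.environ["AZURE_CLIENT_ID"] = "<client-id>"
os.environ["AZURE_CLIENT_SECRET"] = "<client-secret>"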
Or set the properties in config and use ClientSecretCredential to create the credential:
from azure.identity import ClientSecretCredential
subscription_id = creds_obj["AZURE_SUBSCRIPTION_ID"]
tenant_id = creds_obj["AZURE_TENANT_ID"]
client_id = creds_obj["AZURE_CLIENT_ID"]
client_secret = creds_obj["AZURE_CLIENT_SECRET"]
credential = ClientSecretCredential(tenant_id=tenant_id, client_id=client_id, client_secret=client_secret)
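The resulting credential can then be passed to the management clients from the question in place of DefaultAzureCredential:

from azure.mgmt.resource import ResourceManagementClient

resource_client = ResourceManagementClient(
    credential=credential,
    subscription_id=subscription_id
)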
I was having somewhat similar trouble following this Azure Key Vault tutorial, which brought me here.
The solution I found was overriding the default values in the DefaultAzureCredential() constructor.
https://learn.microsoft.com/en-us/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python
For reasons people far smarter than me will be able to explain, I found that even though I had credentials from the Azure CLI, it was not using those and was instead looking for environment credentials, which I did not have. So it threw an exception.
Once I set the exclude_environment_credential argument to True, it then looked for managed identity credentials instead, which again I did not have.
Eventually, when I explicitly excluded all credentials other than those from the CLI, it worked for me.
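For example, a minimal sketch of that exclusion approach (keyword argument names as in azure-identity 1.x; check the constructor reference above for your installed version):

from azure.identity import DefaultAzureCredential

# Exclude every mechanism except the Azure CLI credential
credential = DefaultAzureCredential(
    exclude_environment_credential=True,
    exclude_managed_identity_credential=True,
    exclude_shared_token_cache_credential=True,
    exclude_visual_studio_code_credential=True,
    exclude_interactive_browser_credential=True
)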
I hope this helps someone. Those with more experience, please feel free to edit as you see fit.

MICRONAUT_FUNCTION_NAME environment variable is not working in AWS Lambda

I want to write multiple functions inside our app, so instead of putting the config in application.yml I use the MICRONAUT_FUNCTION_NAME environment variable in AWS Lambda, but I keep receiving this error:
No function found for name: xxx: java.lang.IllegalStateException
java.lang.IllegalStateException: No function found for name: xxx
at io.micronaut.function.executor.AbstractExecutor.lambda$resolveFunction$0(AbstractExecutor.java:60)
at java.util.Optional.orElseThrow(Optional.java:290)
at io.micronaut.function.executor.AbstractExecutor.resolveFunction(AbstractExecutor.java:60)
at io.micronaut.function.executor.StreamFunctionExecutor.execute(StreamFunctionExecutor.java:89)
at io.micronaut.function.aws.MicronautRequestStreamHandler.handleRequest(MicronautRequestStreamHandler.java:54)
Does anyone know what I missed, or is it not possible to have multiple functions?
You can use io.micronaut:micronaut-function-aws:1.4.0 with Micronaut version 1.3.3.
This happens because I use Micronaut version 1.3.3. If I downgrade to 1.2.11, it works perfectly.

Use Puppet Apache class to install Apache 1.3 on CentOS

I'm trying to create a Vagrant setup using CentOS 6.4 and Apache 1.3 (this is for a legacy application). I am using Puppet (though if an answer in Chef is easier, I'd be happy to use it) and the Puppetlabs Apache class. The issue I'm having is that it installs Apache 2.2, but I don't see how to make it install Apache 1.3 instead.
What am I doing wrong and how can I do it right? (Answers of "Upgrade your app" will be downvoted - I don't have the authority to make that decision.)
The module you're using doesn't explicitly expose a parameter to specify which version of the httpd package you want to install.
Instead of using the Puppetlabs module, you could use the Apache module from Alessandro Franceschi (source here; also on the Forge). If the package you need to install has a different name than httpd, the module exposes a package parameter which you can override like this:
class { 'apache':
  package => 'apache13',
}
If, instead, Apache 1.3 is provided by the same httpd package, you can declare the specific version you want via the version parameter:
class { 'apache':
  version => '1.3.39',
}
You can also combine the two parameters, as shown in the sketch below.
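A combined sketch, reusing the package name and version from the two snippets above:

class { 'apache':
  package => 'apache13',
  version => '1.3.39',
}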
Using those modules returns the following error on Red Hat:
Error 400 on SERVER: Illegal expression.
A Type-Name is unacceptable as function name in a Function Call at /etc/puppet/modules/apache/man.
