ENVIRONMENT:
AWX version: 21.4.0
AWX install method: operator 0.26.0
AWX deployment target: k3s (awx-on-k3s)
Operating System: Red Hat Enterprise Linux 8
Web Browser: Google Chrome
Hello team, I am configuring notifications and I am getting the following error.
If anyone can help I would be grateful!
Traceback (most recent call last):
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/main/tasks/system.py", line 290, in send_notifications
sent = notification.notification_template.send(notification.subject, notification.body)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/main/models/notifications.py", line 185, in send
return backend_obj.send_messages([notification_obj])
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/main/notifications/webhook_backend.py", line 81, in send_messages
raise Exception(smart_str(_("Error sending notification webhook: {}").format(r.status_code)))
Exception: Error sending notification webhook: 400
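Since the 400 is returned by the receiving endpoint rather than by AWX itself, it can help to replay the same request outside AWX and look at the response body, which AWX does not surface. A minimal sketch, assuming the requests library is available; the URL is a placeholder for whatever is configured in the notification template:

import requests

# Placeholder: substitute the target URL from the AWX notification template.
url = "https://example.com/webhook"

# Replay a minimal JSON body and print the full response;
# the 400 body usually says which header or field was rejected.
r = requests.post(url, json={"body": "test from AWX"}, timeout=10)
print(r.status_code)
print(r.text)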
I’ve rebooted my system and then started all my containers with the vendor/bin/sail up command; the only one that failed to come back up was MySQL. The error is the following:
ERROR: for mysql a bytes-like object is required, not 'str'
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/docker/api/client.py", line 261, in _raise_for_status
response.raise_for_status()
File "/usr/lib/python3/dist-packages/requests/models.py", line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.25/containers/afdd1cbf7f45d9b20612bcaf73eef1b0bc1dd631bc6aa3dcfbf630c64e8a3662/start
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/compose/service.py", line 625, in start_container
container.start()
File "/usr/lib/python3/dist-packages/compose/container.py", line 241, in start
return self.client.start(self.id, **options)
File "/usr/lib/python3/dist-packages/docker/utils/decorators.py", line 19, in wrapped
return f(self, resource_id, *args, **kwargs)
File "/usr/lib/python3/dist-packages/docker/api/container.py", line 1095, in start
self._raise_for_status(res)
File "/usr/lib/python3/dist-packages/docker/api/client.py", line 263, in _raise_for_status
raise create_api_error_from_http_exception(e)
File "/usr/lib/python3/dist-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 500 Server Error: Internal Server Error ("b'Ports are not available: listen tcp 0.0.0.0:3306: bind: An attempt was made to access a socket in a way forbidden by its access permissions.'")
I’m running this container on Ubuntu Server 20.04.
Providing an absolute path to your nginx/mysql conf file might fix the problem. I haven't tried the solution myself yet.
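The error text ("Ports are not available: listen tcp 0.0.0.0:3306") points at the published host port rather than at a conf file: something on the host is probably already bound to 3306. A quick sketch to check, using only the standard library, before remapping the published port in docker-compose.yml:

import socket

# Try to bind the exact address the MySQL container wants to publish.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.bind(("0.0.0.0", 3306))
    print("port 3306 is free")
except OSError as e:
    print("port 3306 is already taken:", e)
finally:
    s.close()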
I am trying to run AWS CLI commands from my local Mac, but the connection keeps timing out, and traceroute is unable to reach s3.us-east.amazonaws.com.
I have run aws configure on both my local Mac and my EC2 instance. It works on EC2 (not surprising), but not on my local Mac.
I have a single user who has sysadmin access.
As I said, the AWS CLI works on my EC2 instance, and the commands below yield the following output.
Is there something else I need to do to get the AWS CLI to connect from my Mac?
[root@ip-172-31-26-40 ec2-user]# aws s3 ls
2019-11-19 19:55:14 wildrydes.denis.putnam
[root@ip-172-31-26-40 ec2-user]# aws s3api list-buckets
{
    "Owner": {
        "DisplayName": "denisputnam",
        "ID": "22873dab63c6750106aa2bf9f5584754d9b5449067a07c5ab57841967022f3fc"
    },
    "Buckets": [
        {
            "CreationDate": "2019-11-19T19:55:14.000Z",
            "Name": "wildrydes.denis.putnam"
        }
    ]
}
[root@ip-172-31-26-40 ec2-user]#
Debug output:
Traceback (most recent call last):
File "site-packages/botocore/endpoint.py", line 200, in _do_get_response
File "site-packages/botocore/endpoint.py", line 244, in _send
File "site-packages/botocore/httpsession.py", line 287, in send
botocore.exceptions.ConnectTimeoutError: Connect timeout on endpoint URL: "https://iam.us-east.amazonaws.com/"
2019-12-04 17:20:51,304 - MainThread - botocore.hooks - DEBUG - Event needs-retry.iam.ListUsers: calling handler <botocore.retryhandler.RetryHandler object at 0x7ff818983250>
2019-12-04 17:20:51,304 - MainThread - botocore.retryhandler - DEBUG - retry needed, retryable exception caught: Connect timeout on endpoint URL: "https://iam.us-east.amazonaws.com/"
Traceback (most recent call last):
File "site-packages/urllib3/connection.py", line 157, in _new_conn
File "site-packages/urllib3/util/connection.py", line 84, in create_connection
File "site-packages/urllib3/util/connection.py", line 74, in create_connection
socket.timeout: timed out
This might have been answered here:
AWS S3 CLI - Could not connect to the endpoint URL
Essentially, perhaps your config file contains "us-east" instead of "us-east-1".
(The IAM timeout is trying to hit iam.us-east.... But I don't think "us-east", without the "-1", is an official region.)
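One way to confirm which region the CLI actually resolves, without guessing: botocore ships with the AWS CLI, so this sketch should run anywhere aws does. If it prints "us-east", fix the region line in ~/.aws/config (or rerun aws configure) to "us-east-1".

import botocore.session

# Print the region botocore (and therefore the AWS CLI) will use.
session = botocore.session.get_session()
print(session.get_config_variable("region"))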
At the moment I'm trying to create a cluster on AWS EC2 with GraphLab Create. The code is as follows:
import graphlab as gl

ec2config = gl.deploy.Ec2Config(region='us-west-2', instance_type='m3.large',
                                aws_access_key_id='secret-access-key-id',
                                aws_secret_access_key='secret-access-key')

ec2 = gl.deploy.ec2_cluster.create(name='Test Cluster',
                                   s3_path='s3://test-big-data-2016',
                                   ec2_config=ec2config,
                                   idle_shutdown_timeout=3600,
                                   num_hosts=1)
When the above code is executed I get the following error:
Traceback (most recent call last):
File "test.py", line 59, in
ec2 = gl.deploy.ec2_cluster.create(name='Test Cluster', s3_path='s3://test-big-data-2016', ec2_config=ec2config, idle_shutdown_timeout=36000, num_hosts=1)
File "/Users/remco/anaconda/envs/gl-env/lib/python2.7/site-packages/graphlab/deploy/ec2_cluster.py", line 83, in create
cluster.start()
File "/Users/remco/anaconda/envs/gl-env/lib/python2.7/site-packages/graphlab/deploy/ec2_cluster.py", line 233, in start
self.idle_shutdown_timeout
File "/Users/remco/anaconda/envs/gl-env/lib/python2.7/site-packages/graphlab/deploy/_executionenvironment.py", line 372, in _start_commander_host
raise RuntimeError('Unable to start host(s). Please terminate '
RuntimeError: Unable to start host(s). Please terminate manually from the AWS console.
When I look in the EC2 Management Console, a new instance has been launched and is running, but I am still getting the error in the terminal.
I really don't know what I'm doing wrong here. I followed the exact instructions from: https://turi.com/learn/userguide/deployment/pipeline-example.html
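Not an answer to the root cause, but since the error asks for a manual terminate from the AWS console, the cleanup can also be scripted. A sketch assuming boto3 is installed; the instance id is a placeholder to be copied from the console:

import boto3

# Placeholder id: copy the real one from the EC2 Management Console.
ec2 = boto3.client('ec2', region_name='us-west-2')
ec2.terminate_instances(InstanceIds=['i-0123456789abcdef0'])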
I have a Flask app running on nginx + uWSGI.
On my local server (non-nginx), I get a nice stack trace + error reporting for exceptions.
Like this:
$ python run.py
Traceback (most recent call last):
File "run.py", line 1, in <module>
from myappname import app
File "/home/me/myappname/myappname/__init__.py", line 27, in <module>
file_handler.setLevel(logging.debug)
File "/usr/lib/python2.7/logging/__init__.py", line 710, in setLevel
self.level = _checkLevel(level)
File "/usr/lib/python2.7/logging/__init__.py", line 190, in _checkLevel
raise TypeError("Level not an integer or a valid string: %r" % level)
On nginx, there is next to no logging whatsoever (in /var/log/nginx/error.log).
This post suggests adding app.logger.exception('Failed') to my script, which didn't help.
How do I enable this sort of logging for debugging purposes?
With nginx + uWSGI, the app's console output ends up in the uWSGI log rather than in nginx's error.log, and you must make the app survive exceptions; otherwise you'll only see 500 or 400 errors from nginx.
Try running the app outside nginx (with the built-in development server) until it seems stable.
Use the logging module to capture app status information in your own log file. This strategy will be useful in the long run.
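The traceback in the question actually shows the bug: logging.debug (the module-level function) is being passed to setLevel, which expects the integer constant logging.DEBUG. A minimal sketch of a file handler that records app output in your own log, independent of nginx; names like app.log are placeholders:

import logging
from logging.handlers import RotatingFileHandler

from flask import Flask

app = Flask(__name__)

# logging.DEBUG is the integer constant; logging.debug is a function,
# and passing the function to setLevel raises the TypeError seen above.
file_handler = RotatingFileHandler('app.log', maxBytes=1000000, backupCount=3)
file_handler.setLevel(logging.DEBUG)
file_handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s: %(message)s'))
app.logger.addHandler(file_handler)
app.logger.setLevel(logging.DEBUG)

@app.route('/')
def index():
    app.logger.info('request handled')
    return 'ok'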
I'm starting out learning about AMQP and RabbitMQ.
To get myself going, I have used a CLI tool, rabbitmqadmin, to successfully publish data to a RabbitMQ development install I created on my Mac OS X box. So far so good: I can publish messages and watch them dequeue...
However, when I try the exact same functionality against the Heroku / CloudAMQP instance, the rabbitmqadmin client seems to fall over.
This is the call:
rabbitmqadmin --host lemur.cloudamqp.com --vhost app4444444_heroku.com --user app4444444_heroku.com --password <withheld> publish routing_key=test payload="hello"
...and this is the output:
Traceback (most recent call last):
File "/usr/local/bin/rabbitmqadmin", line 828, in <module>
main()
File "/usr/local/bin/rabbitmqadmin", line 325, in main
method()
File "/usr/local/bin/rabbitmqadmin", line 428, in invoke_get
result = self.post(uri, json.dumps(upload))
File "/usr/local/bin/rabbitmqadmin", line 354, in post
return self.http("POST", path, body)
File "/usr/local/bin/rabbitmqadmin", line 377, in http
resp = conn.getresponse()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 1013, in getresponse
response.begin()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 402, in begin
version, status, reason = self._read_status()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 366, in _read_status
raise BadStatusLine(line)
httplib.BadStatusLine: ''
Any thoughts or ideas gratefully received!
Add --ssl to the command line. CloudAMQP's web UI and management API are HTTPS only, so a plain-HTTP request gets back something httplib cannot parse as a status line (hence the empty BadStatusLine).
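For reference, the same call from the question with only the flag added; if it still fails, the management port may also need to be set explicitly with --port:

rabbitmqadmin --ssl --host lemur.cloudamqp.com --vhost app4444444_heroku.com --user app4444444_heroku.com --password <withheld> publish routing_key=test payload="hello"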