Erlang Heroku RabbitMQ - heroku

I run this command in a terminal:
heroku config:get RABBITMQ_BIGWIG_RX_URL --app app1
It gives me a string:
amqp://Ajwj23X3:nsi3sC#leaping-charlock-1.bigwig.lshift.net:18372/Hbau2x3d
I copied the login, password, host, and port into my Erlang code:
-record(amqp_params_network, {username = <<"Ajwj23X3">>,
                              password = <<"nsi3sC">>,
                              virtual_host = <<"/">>,
                              host = "leaping-charlock-1.bigwig.lshift.net",
                              port = 18372,
                              channel_max = 0,
                              frame_max = 0,
                              heartbeat = 0,
                              connection_timeout = infinity,
                              ssl_options = none,
                              auth_mechanisms =
                                  [fun amqp_auth_mechanisms:plain/3,
                                   fun amqp_auth_mechanisms:amqplain/3],
                              client_properties = [],
                              socket_options = []}).
But when I run the program, the connection fails.
How do I correctly write amqp_params_network in Erlang for Heroku RabbitMQ?

-record is the definition of the record type, and the values contained there are just defaults. It would be rather unusual to hard-code connection parameters as defaults for the record, even more so when the record definition is provided by an external library.
Instead, construct a record instance with the required data:
Params = #amqp_params_network{username = <<"Ajwj23X3">>,
                              password = <<"nsi3sC">>,
                              virtual_host = <<"Hbau2x3d">>,
                              host = "leaping-charlock-1.bigwig.lshift.net",
                              port = 18372},
and use that instance when connecting:
{ok, Connection} = amqp_connection:start(Params),
Note also that the trailing path segment of the URL Heroku returns (Hbau2x3d) is the virtual host; leaving the record's default <<"/">> in place is another common reason the connection fails on Bigwig.
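Every field you need is encoded in the URL that heroku config:get returns, in the shape amqp://user:password@host:port/vhost. A small Python sketch (the credentials below are placeholders, since the password in the question appears redacted) shows how the pieces split apart and map onto amqp_params_network:

```python
from urllib.parse import urlparse

# Placeholder URL in the amqp://user:password@host:port/vhost shape
# returned by `heroku config:get RABBITMQ_BIGWIG_RX_URL`.
url = "amqp://Ajwj23X3:secret@leaping-charlock-1.bigwig.lshift.net:18372/Hbau2x3d"

parts = urlparse(url)
username = parts.username       # maps to the record's username field
password = parts.password       # placeholder here
host = parts.hostname           # maps to host
port = parts.port               # maps to port
vhost = parts.path.lstrip("/")  # maps to virtual_host; note it is NOT "/"
```

The same decomposition can of course be done in Erlang (e.g. with amqp_uri:parse/1 from the RabbitMQ Erlang client), which avoids copying fields by hand.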

Related

Creating cross region autonomous database failing with 'message': "The following tag namespaces / keys are not authorized or not found: 'oracle-tags'"

I need help creating a cross-region standby database via Python. I have tried creating it with
oci.database.models.CreateCrossRegionAutonomousDatabaseDataGuardDetails
but I was unable to find an example, so I pieced the following together from the SDK documentation:
response = oci_client.get_autonomous_database(autonomous_database_id=primary_db_id)
primary_db_details = response.data

def create_cross_region_standby_db(db_client, primary_db_details: oci.database.models.AutonomousDatabase):
    adw_request = oci.database.models.CreateCrossRegionAutonomousDatabaseDataGuardDetails()
    adw_request.compartment_id = primary_db_details.compartment_id
    adw_request.db_name = primary_db_details.db_name
    adw_request.data_storage_size_in_tbs = primary_db_details.data_storage_size_in_tbs
    adw_request.data_storage_size_in_gbs = primary_db_details.data_storage_size_in_gbs
    adw_request.cpu_core_count = primary_db_details.cpu_core_count
    adw_request.db_version = primary_db_details.db_version
    adw_request.db_workload = primary_db_details.db_workload
    adw_request.license_model = primary_db_details.license_model
    adw_request.is_mtls_connection_required = primary_db_details.is_mtls_connection_required
    adw_request.is_auto_scaling_enabled = primary_db_details.is_auto_scaling_enabled
    adw_request.source_id = primary_db_details.id
    adw_request.subnet_id = <standby subnet id>
    adw_response = db_client.create_autonomous_database(create_autonomous_database_details=adw_request)
    print(adw_response.data)
    adw_id = adw_response.data.id
    oci.wait_until(db_client, db_client.get_autonomous_database(adw_id), 'lifecycle_state', 'AVAILABLE')
    print("Created ADW {}".format(adw_id))
    return adw_id
create_cross_region_standby_db is called using standby-region credentials. Creating the primary database in the same region works fine.

Using variables in cx_Oracle.connect method

I'm trying to read an env.properties file to get the credentials to connect to an Oracle DB using cx_Oracle.
The variables have the correct values, but I guess the issue is with the way I'm using the connect method.
Is there something wrong?
config = ConfigParser.RawConfigParser()
config.read('env.properties')
user = config.get('DatabaseSection', 'database.user')
pwd = config.get('DatabaseSection', 'database.password')
host = config.get('DatabaseSection', 'database.db_host')
port = config.get('DatabaseSection', 'database.db_port')
instance = config.get('DatabaseSection', 'database.db_instance')
conn = cx_Oracle.connect(user,pwd,host:port/instance)
cur = conn.cursor()
I managed with the piece of code below:
dsn_tns = cx_Oracle.makedsn(host, port, instance)
conn = cx_Oracle.connect(user=user, password=pwd, dsn=dsn_tns)
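An alternative to makedsn is to build an Easy Connect string (host:port/service) yourself, which is essentially what the broken positional call above was attempting. A standard-library-only sketch (the property values below are made up, and the actual connect call is commented out since it needs a live database; the question's Python 2 ConfigParser behaves the same as Python 3's configparser for these calls):

```python
import configparser

# Hypothetical env.properties content mirroring the question's keys.
props = """
[DatabaseSection]
database.user = scott
database.password = tiger
database.db_host = dbhost.example.com
database.db_port = 1521
database.db_instance = ORCL
"""

config = configparser.RawConfigParser()
config.read_string(props)
user = config.get('DatabaseSection', 'database.user')
pwd = config.get('DatabaseSection', 'database.password')
host = config.get('DatabaseSection', 'database.db_host')
port = config.get('DatabaseSection', 'database.db_port')
instance = config.get('DatabaseSection', 'database.db_instance')

# Easy Connect string: host:port/service_name
dsn = "{}:{}/{}".format(host, port, instance)
# conn = cx_Oracle.connect(user=user, password=pwd, dsn=dsn)
```

The key point either way: the third argument to cx_Oracle.connect must be a single DSN string (or a makedsn result), not the bare expression host:port/instance, which is not valid Python.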

How to configure Flume to listen to web API HTTP requests

I have built a web API application, which is published on an IIS server. I am trying to configure Apache Flume to listen to that web API and save the responses of the HTTP requests in HDFS. This is the POST method I need to listen to:
[HttpPost]
public IEnumerable<Data> obtenerValores(arguments arg)
{
    Random rdm = new Random();
    int ano = arg.ano;
    int rdmInt;
    decimal rdmDecimal;
    int anoActual = DateTime.Now.Year;
    int mesActual = DateTime.Now.Month;
    List<Data> ano_mes_sales = new List<Data>();
    while (ano <= anoActual)
    {
        int mes = 1;
        while ((anoActual == ano && mes <= mesActual) || (ano < anoActual && mes <= 12))
        {
            rdmInt = rdm.Next();
            rdmDecimal = (decimal)rdm.NextDouble();
            Data anoMesSales = new Data(ano, mes, (rdmInt * rdmDecimal));
            ano_mes_sales.Add(anoMesSales);
            mes++;
        }
        ano++;
    }
    return ano_mes_sales;
}
Flume is running on a CentOS virtual machine under VMware; this is my attempt to configure Flume to listen to that application:
# Sources, channels, and sinks are defined per agent name, in this case 'a1'.
a1.sources = source1
a1.channels = channel1
a1.sinks = sink1
a1.sources.source1.interceptors = i1 i2
a1.sources.source1.interceptors.i1.type = host
a1.sources.source1.interceptors.i1.preserveExisting = false
a1.sources.source1.interceptors.i1.hostHeader = host
a1.sources.source1.interceptors.i2.type = timestamp
# For each source, channel, and sink, set standard properties.
a1.sources.source1.type = org.apache.flume.source.http.HTTPSource
a1.sources.source1.bind = transacciones.misionempresarial.com/CSharpFlume
a1.sources.source1.port = 80
# JSONHandler is the default for the HTTP source
a1.sources.source1.handler = org.apache.flume.source.http.JSONHandler
a1.sources.source1.channels = channel1
a1.channels.channel1.type = memory
a1.sinks.sink1.type = hdfs
a1.sinks.sink1.hdfs.path = /monthSales
a1.sinks.sink1.hdfs.filePrefix = event-file-prefix-
a1.sinks.sink1.hdfs.round = false
a1.sinks.sink1.channel = channel1
# Other properties are specific to each type of source, channel, or sink.
# In this case, we specify the capacity of the memory channel.
a1.channels.channel1.capacity = 1000
I am using curl to post, here is my attempt:
curl -X POST -H 'Content-Type: application/json; charset=UTF-8' -d '[{"ano":"2010"}]' http://transacciones.misionempresarial.com/CSharpFlume/api/SourceFlume/ObtenerValores
I only get this error:
{"Message":"Error."}
My question is: what is the right way to configure Flume to listen to HTTP requests to my web API, and what am I missing?
The standard Flume 'HTTPSource', and its default JSONHandler, will only process an event in a specific, Flume-centric format.
That format is documented in the user manual, and also in the comments at the beginning of the JSONHandler source code.
In summary, it expects to receive a list of JSON objects, each one containing headers (key/value pairs, mapped to the Flume Event headers) and body (a simple string, mapped to the Flume Event body).
To take your example, if you send:
[{"headers": {}, "body": "{\"ano\":\"2010\"}"}]
I think you'd get what you were looking for.
If you don't have the flexibility to change what you send, you may be able to use org.apache.flume.source.http.BLOBHandler, depending on what processing you are trying to do. (Note: the manual documents only org.apache.flume.sink.solr.morphline.BlobHandler, which is not the same thing; there are some notes in FLUME-2718.) Otherwise, you may need to provide your own implementation of Flume's HTTPSourceHandler interface.
Side note: the HTTP Source bind option requires a hostname or IP address. You may just be being lucky with your value being treated as the hostname, and the path being ignored.
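The wrapping described above can also be done programmatically. A small standard-library Python sketch that builds a JSONHandler-shaped payload from the question's record:

```python
import json

record = {"ano": "2010"}

# Flume's JSONHandler expects a JSON *list* of events, each with
# "headers" (a string-to-string map) and "body" (a plain string),
# so the application JSON must be serialized into the body string.
event = {"headers": {}, "body": json.dumps(record)}
payload = json.dumps([event])
# POST `payload` with Content-Type: application/json to the HTTP source,
# e.g. via urllib.request or curl -d "$payload".
```

The double encoding is deliberate: the outer list is what JSONHandler parses, and the inner string is delivered untouched as the Flume event body.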

Error while configuring EMS with Database in Fault Tolerant mode

I am trying to set up my EMS in FT mode. I have configured all the parameters in the two EMS config files,
but I am getting the warning:
Unable to initialize fault tolerant connection, remote server returned 'invalid user name'
The server name and password are exactly the same in both config files, so I don't know where the error is.
I am attaching the EMS config files that I am using for the EMS servers:
tibemsd.conf:
authorization = enabled
password =
server=EMS-HakanLAL
listen=tcp://7222
Ft_active=tcp://8222
users = users.conf
groups = groups.conf
topics = topics.conf
queues = queues.conf
acl_list = acl.conf
factories = factories.conf
routes = routes.conf
bridges = bridges.conf
transports = transports.conf
tibrvcm = tibrvcm.conf
durables = durables.conf
channels = channels.conf
stores = stores.conf
store = "C:/temp"
tibemsdft.conf:
authorization = enabled
password =
server=EMS-HakanLAL
listen=tcp://8222
Ft_active=tcp://7222
users = C:\Tibco\ems\8.1\BackUp\users.conf
groups = C:\Tibco\ems\8.1\BackUp\groups.conf
topics = C:\Tibco\ems\8.1\BackUp\topics.conf
queues = C:\Tibco\ems\8.1\BackUp\queues.conf
acl_list = C:\Tibco\ems\8.1\BackUp\acl.conf
factories = C:\Tibco\ems\8.1\BackUp\factories.conf
routes = C:\Tibco\ems\8.1\BackUp\routes.conf
bridges = C:\Tibco\ems\8.1\BackUp\bridges.conf
transports = C:\Tibco\ems\8.1\BackUp\transports.conf
tibrvcm = C:\Tibco\ems\8.1\BackUp\tibrvcm.conf
durables = C:\Tibco\ems\8.1\BackUp\durables.conf
channels = C:\Tibco\ems\8.1\BackUp\channels.conf
stores = C:\Tibco\ems\8.1\BackUp\stores.conf
store = "C:\ProgramData\TIBCO3\tibco\cfgmgmt\ems\data"
Your tibemsd.conf and tibemsdft.conf look fine. What you are probably missing is registering the server name as a user within users.conf.
Once you make that entry, both servers should be able to connect to each other.
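For illustration, a users.conf entry is one line per user; the added entry might look roughly like the sketch below (the description text is made up, and the exact line syntax should be checked against the EMS User's Guide for your version; the password field can be set afterwards with tibemsadmin rather than edited by hand):

```
# users.conf (hypothetical entry; syntax: name:password:"description")
EMS-HakanLAL::"server account used for the FT connection"
```

Both members of the FT pair need to be able to authenticate as this user, so the entry must exist in the users.conf that each server reads.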

session error when using multiple uwsgi workers and beaker session.type is memory

I'm running a Pyramid web app, using velruse for OAuth. Running the app alone, it succeeds,
but running it under uWSGI with multiple workers and session.type = memory,
request.session does not contain the necessary token info when the OAuth callback comes back.
production.ini:
session.type = memory
session.data_dir = %(here)s/data/sessions/data
session.lock_dir = %(here)s/data/sessions/lock
session.key = mykey
session.secret = mysecret
[uwsgi]
socket = 127.0.0.1:6543
master = true
workers = 8
max-requests = 65536
debug = false
autoload = true
virtualenv = /home/myname/my_env
pidfile = ./uwsgi.pid
daemonize = ./mypyramid-uwsgi.log
If you use memory as the session store, only the worker in which the session data was written will be able to use that info. You should use another session store, one that can be shared by all of the workers/processes.
Your uWSGI config is not clear (it looks like it only contains the socket option). Can you re-paste it?
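For example, since the production.ini above already defines data_dir and lock_dir, switching beaker to a file-backed store, which all uWSGI workers on the same host can share, might look like the sketch below; ext:memcached or ext:database are other common choices when workers span multiple hosts:

```ini
session.type = file
session.data_dir = %(here)s/data/sessions/data
session.lock_dir = %(here)s/data/sessions/lock
session.key = mykey
session.secret = mysecret
```

With type = memory each worker process keeps its own private dict, so the OAuth callback lands on a worker that never saw the token; any shared backend removes that mismatch.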
