How do you download Bloomberg intraday data?

How do you get intraday 1-minute data from Bloomberg, please?
I want bid & ask for 5 futures saved as a data frame.
Thanks.

I believe you are looking for the intraday 1-minute bar data: Open, High, Low, Close, etc. You will need to send an IntradayBarRequest to the //blp/refdata service.
Revised answer:
Send two IntradayBarRequests, one for the BID and one for the ASK event type, with interval = 1 minute, to the //blp/refdata service. Extract the "open" element from each data point in the response message(s).
IntradayBarRequest = {
security = "IBM UN Equity"
eventType = BID
startDateTime = 2019-02-13T00:00:00.000
endDateTime = 2019-02-14T23:59:59.000
interval = 1
gapFillInitialBar = false
adjustmentNormal = false
adjustmentAbnormal = false
adjustmentSplit = false
adjustmentFollowDPDF = false
}
Sample data point:
barTickData = {
time = 2019-02-13T14:30:00.000
open = 136.45
high = 136.87
low = 136.25
close = 136.76
volume = 91
numEvents = 45
value = 12424.061
}
See the IntradayBarExample code example in the SDK.
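For a Python starting point, here is a minimal sketch using the blpapi SDK, along the lines of that example. The ticker, dates, and the local Terminal session on the default port are assumptions; substitute your own futures and repeat per security:

import blpapi
import pandas as pd

session = blpapi.Session()              # assumes a local Terminal/B-PIPE on localhost:8194
session.start()
session.openService("//blp/refdata")
service = session.getService("//blp/refdata")

rows = []
for event_type in ("BID", "ASK"):       # one request per event type
    request = service.createRequest("IntradayBarRequest")
    request.set("security", "ES1 Index")           # hypothetical future; use your own
    request.set("eventType", event_type)
    request.set("interval", 1)                     # 1-minute bars
    request.set("startDateTime", "2019-02-13T00:00:00")
    request.set("endDateTime", "2019-02-14T23:59:59")
    session.sendRequest(request)
    while True:
        event = session.nextEvent(500)
        for msg in event:
            if msg.hasElement("barData"):
                for bar in msg.getElement("barData").getElement("barTickData").values():
                    rows.append({"side": event_type,
                                 "time": bar.getElementAsDatetime("time"),
                                 "open": bar.getElementAsFloat("open")})
        if event.eventType() == blpapi.Event.RESPONSE:
            break

df = pd.DataFrame(rows)                 # the data frame the question asks for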

You can use xbbg:
In [1]: from xbbg import blp
In [2]: blp.bdib(ticker='SPY US Equity', dt='2019-01-17').tail()
Out[2]:
ticker SPY US Equity
field open high low close volume num_trds
time
2019-01-17 15:57:00-05:00 262.82 262.92 262.70 262.88 644947 2744
2019-01-17 15:58:00-05:00 262.87 262.89 262.77 262.86 713451 3152
2019-01-17 15:59:00-05:00 262.87 263.05 262.74 263.00 2248033 5616
2019-01-17 16:09:00-05:00 262.96 262.96 262.96 262.96 0 1
2019-01-17 16:15:00-05:00 262.96 262.96 262.96 262.96 0 1

Dan, if you are looking to work with pandas I would suggest using TIA: https://github.com/bpsmith/tia
You mentioned you are working with Python 3. At the moment, TIA is only compatible with Python 2, but the issue at https://github.com/bpsmith/tia/issues/11 has a Python 3 conversion. I've been using this recently and it's pretty good. An example:
import datetime
import pandas as pd
from tia.bbg import LocalTerminal

sid = 'IBM US EQUITY'
event = 'TRADE'
# previous business day, combined with the session start/end times
dt = (pd.Timestamp.now() - pd.tseries.offsets.BDay(1)).date()
start = datetime.datetime.combine(dt, datetime.time(13, 30))
end = datetime.datetime.combine(dt, datetime.time(21, 30))
f = LocalTerminal.get_intraday_bar(sid, event, start, end,
                                   interval=60).as_frame()
f.head(1)
close high low numEvents open time value volume
0 162.2500 162.70 161.51 4005 162.4900 2015-02-24 14:30:00 110345672 680888
The github link above has loads of examples, too.

Related

Technical Analysis (MACD) for crypto trading

Background:
I am writing a crypto trading bot for fun and profit.
So far, it connects to an exchange and gets streaming price data.
I am using this price to create a technical indicator (MACD).
Generally for MACD, it is recommended to use closing prices over 26, 12 and 9 days.
However, for my trading strategy, I plan to use data over 26, 12 and 9 minutes.
Question:
I am getting multiple (say 10) price ticks in a minute.
Do I simply average them and round the time to the next minute (so they all fall in the same minute bucket)? Or is there a better way to handle this?
Many thanks!
This is how I handled it. Streaming data comes in at a sub-second period. The code checks for a new low and high during the streaming period and builds the candle. Probably ugly since I'm not a trained developer, but it works.
Adjust "...round('20s')" and "if dur > 15:" for whatever candle period you want.
def on_message(self, msg):
    # tkr, cnx, ctime, closeit, self.st and self.candle are defined elsewhere in the class
    df = pd.json_normalize(msg, record_prefix=msg['type'])
    df['date'] = df['time']
    df['price'] = df['price'].astype(float)
    df['low'] = df['low'].astype(float)
    for i in range(0, len(self.df)):
        if i == (len(self.df) - 1):
            # latest tick: round its timestamp to the candle period
            self.rounded_time = self.df['date'][i]
            self.rounded_time = pd.to_datetime(self.rounded_time).round('20s')
            self.lhigh = self.df['price'][i]
            self.lhighcandle = self.candle['high'][i]
            self.llow = self.df['price'][i]
            self.lowcandle = self.candle['low'][i]
            self.close = self.df['price'][i]
    # extend the current candle's high/low if the latest tick breaks them
    if self.lhigh > self.lhighcandle:
        nhigh = self.lhigh
    else:
        nhigh = self.lhighcandle
    if self.llow < self.lowcandle:
        nlow = self.llow
    else:
        nlow = self.lowcandle
    newdata = pd.DataFrame.from_dict({
        'date': self.df['date'],
        'tkr': tkr,
        'open': self.df.price.iloc[0],
        'high': nhigh,
        'low': nlow,
        'close': self.close,
        'vol': self.df['last_size']})
    self.candle = self.candle.append(newdata, ignore_index=True).fillna(0)
    if ctime > self.rounded_time:
        closeit = True
        self.en = time.time()
    if closeit:
        dur = (self.en - self.st)
        if dur > 15:
            # candle period elapsed: persist the finished candle and start a new one
            self.st = time.time()
            out = self.candle[-1:]
            out.to_sql(tkr, cnx, if_exists='append')
            dat = ['tkr', 0, 0, 100000, 0, 0]
            self.candle = pd.DataFrame([dat], columns=['tkr', 'open', 'high', 'low', 'close', 'vol'])
As far as I know, most or all technical-indicator formulas rely on same-sized bars to produce accurate and meaningful results, so you'll have to do some data transformation. Here's an example of an aggregation technique that uses quantization to get all your bars into uniform sizes: it converts small bar sizes to larger ones, e.g. second bars to minute bars.
// C#, see link above for more info
quoteHistory
    .OrderBy(x => x.Date)
    .GroupBy(x => x.Date.RoundDown(newPeriod))
    .Select(x => new Quote
    {
        Date = x.Key,
        Open = x.First().Open,
        High = x.Max(t => t.High),
        Low = x.Min(t => t.Low),
        Close = x.Last().Close,
        Volume = x.Sum(t => t.Volume)
    });
See Stock.Indicators for .NET for indicators and related tools.
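Since the bot itself is in Python, the same quantization can be done with pandas resampling. A minimal sketch, assuming a ticks DataFrame with a DatetimeIndex and price/size columns built from the stream (those names are assumptions):

import pandas as pd

# resample quantizes ticks into uniform 1-minute buckets, like the GroupBy above
bars = ticks['price'].resample('1min').ohlc()          # open/high/low/close per bucket
bars['volume'] = ticks['size'].resample('1min').sum()  # summed size per bucket

Feeding these uniform bars into the MACD calculation avoids the averaging question entirely: the close column is exactly the per-minute closing price the indicator expects.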

status "Preparing" for more than 2 hours for 350MB file

I have submitted an AutoML run on remote compute (Standard_D12_v2, a 4-node cluster with 28 GB and 4 cores each).
My input file is roughly 350 MB.
The status stays "Preparing" for more than 2 hours, and then the run fails with:
User error: Run timed out. No model completed training in the specified time. Possible solutions:
1) Please check if there are enough compute resources to run the experiment.
2) Increase experiment timeout when creating a run.
3) Subsample your dataset to decrease featurization/training time.
Below is my Python notebook code; please help.
import logging

import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.core.dataset import Dataset
from azureml.core.compute import ComputeTarget
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()
experiment = Experiment(ws, 'nyc-taxi')
cpu_cluster_name = "low-cluster"
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)

data = "https://betaml4543906917.blob.core.windows.net/betadata/2015_08.csv"
dataset = Dataset.Tabular.from_delimited_files(data)
training_data, validation_data = dataset.random_split(percentage=0.8, seed=223)
label_column_name = 'totalAmount'

automl_settings = {
    "n_cross_validations": 3,
    "primary_metric": 'normalized_root_mean_squared_error',
    "enable_early_stopping": True,
    "max_concurrent_iterations": 2,  # a limit for testing purposes; increase as per cluster size
    "experiment_timeout_hours": 2,   # a time limit for testing purposes; remove for real use cases
    "verbosity": logging.INFO,
}

automl_config = AutoMLConfig(task='regression',
                             debug_log='automl_errors.log',
                             compute_target=compute_target,
                             training_data=training_data,
                             label_column_name=label_column_name,
                             **automl_settings)
remote_run = experiment.submit(automl_config, show_output = False)
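For what it's worth, suggestions (2) and (3) in the error message map directly onto the settings above. A hedged sketch of what that could look like (the specific numbers are assumptions, not a verified fix):

automl_settings["experiment_timeout_hours"] = 6   # give featurization/training more room
automl_settings["max_concurrent_iterations"] = 4  # match the 4-node cluster

# or subsample the 350 MB dataset before splitting, per suggestion (3):
dataset = dataset.take_sample(probability=0.2, seed=223)
training_data, validation_data = dataset.random_split(percentage=0.8, seed=223)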

How to have multiple publishers and subscribers in ZMQ/0MQ?

How to create a network which allows for multiple publishers and multiple subscribers to those publishers?
Or is it absolutely necessary for a message broker to be used?
import time
import zmq
from multiprocessing import Process

def bind_pub(sleep_seconds, max_messages, pub_id):
    context = zmq.Context()
    socket = context.socket(zmq.PUB)
    socket.bind("tcp://*:5556")
    message = 0
    while True:
        socket.send_string("1 sending_func=bind_pub message_number=%s pub_id=%s" % (message, pub_id))
        message += 1
        if message >= max_messages:
            break
        time.sleep(sleep_seconds)

def bind_sub(sleep_seconds, max_messages, sub_id):
    context = zmq.Context()
    socket = context.socket(zmq.SUB)
    socket.bind("tcp://*:5556")
    socket.setsockopt_string(zmq.SUBSCRIBE, '1')
    message_n = 0
    while True:
        message = socket.recv_string()
        print(message + " receiving_func=bind_sub sub_id=%s" % sub_id)
        message_n += 1
        if message_n >= max_messages - 1:
            break
        time.sleep(sleep_seconds)

def conect_pub(sleep_seconds, max_messages, pub_id):
    context = zmq.Context()
    socket = context.socket(zmq.PUB)
    socket.connect("tcp://localhost:5556")
    message = 0
    while True:
        socket.send_string("1 sending_func=conect_pub message_number=%s pub_id=%s" % (message, pub_id))
        message += 1
        if message >= max_messages:
            break
        time.sleep(sleep_seconds)

def connect_sub(sleep_seconds, max_messages, sub_id):
    context = zmq.Context()
    socket = context.socket(zmq.SUB)
    socket.connect("tcp://localhost:5556")
    socket.setsockopt_string(zmq.SUBSCRIBE, '1')
    message_n = 0
    while True:
        message = socket.recv_string()
        print(message + " receiving_func=connect_sub sub_id=%s" % sub_id)
        message_n += 1
        if message_n >= max_messages - 1:
            break
        time.sleep(sleep_seconds)
When trying a bind_pub, connect_pub, connect_sub, connect_sub network architecture:
# bind_pub, connect_pub, connect_sub, connect_sub
n_messages = 4
p1 = Process(target=bind_pub, args=(1,n_messages,1))
p2 = Process(target=conect_pub, args=(1,n_messages,2))
p3 = Process(target=connect_sub, args=(0.1,n_messages,1))
p4 = Process(target=connect_sub, args=(0.1,n_messages,2))
p1.start()
p2.start()
p3.start()
p4.start()
p1.join()
p2.join()
p3.join()
p4.join()
Results in pub_id=2 messages going missing:
1 sending_func=bind_pub message_number=1 pub_id=1 receiving_func=connect_sub sub_id=2
1 sending_func=bind_pub message_number=1 pub_id=1 receiving_func=connect_sub sub_id=1
1 sending_func=bind_pub message_number=2 pub_id=1 receiving_func=connect_sub sub_id=2
1 sending_func=bind_pub message_number=2 pub_id=1 receiving_func=connect_sub sub_id=1
1 sending_func=bind_pub message_number=3 pub_id=1 receiving_func=connect_sub sub_id=1
1 sending_func=bind_pub message_number=3 pub_id=1 receiving_func=connect_sub sub_id=2
Similarly running a connect_pub, connect_pub, connect_sub, bind_sub architecture:
# connect_pub, connect_pub, connect_sub, bind_sub
n_messages = 4
p1 = Process(target=conect_pub, args=(1,n_messages,1))
p2 = Process(target=conect_pub, args=(1,n_messages,2))
p3 = Process(target=bind_sub, args=(0.1,n_messages,1))
p4 = Process(target=connect_sub, args=(0.1,n_messages,2))
p1.start()
p2.start()
p3.start()
p4.start()
p1.join()
p2.join()
p3.join()
p4.join()
Results in no messages being received by sub_id=2:
1 sending_func=conect_pub message_number=1 pub_id=1 receiving_func=bind_sub sub_id=1
1 sending_func=conect_pub message_number=1 pub_id=2 receiving_func=bind_sub sub_id=1
1 sending_func=conect_pub message_number=2 pub_id=1 receiving_func=bind_sub sub_id=1
Well, it is fair to mention that ZeroMQ is principally a broker-less framework.
This means the second question is answered a priori: no, a broker is not only not absolutely necessary, it is principally impossible (unless one implements broker-like (semi-)persistence as an extra add-on layer built from standard Zen-of-Zero ZeroMQ tools).
Next, ZeroMQ "sockets" are by far not sockets as you know them:
This is an often re-articulated misconception, so let me repeat it in bold.
Beware:
A ZeroMQ Socket() instance is not a tcp-socket-as-you-know-it. Best read about the main conceptual differences in "ZeroMQ hierarchy in less than a five seconds" or other posts and discussions here.
Yet, more importantly, there seems to be no expressed need that is not covered:
ZeroMQ can serve any of:
many-PUBs : many-SUBs, or
one-PUB : many-SUBs, or even
many-PUBs : one-SUB,
where all or part of those "many" can still get .connect()-ed to one or more AccessPoints, so the resulting topologies can indeed go wild (for details, kindly check the "five seconds" read linked above); one's own imagination seems to be the only ceiling here.
For performance and latency envelopes, feel free to seek and read more in other posts.
It is certainly not necessary to use a broker in order to implement a many-to-many network, but a broker does simplify configuration since each node only needs to know the broker's address, not all of its peers.
Another possibility is a hybrid approach -- using a broker to exchange address information among peers so they can connect to each other directly. You can find an example here: https://github.com/nyfix/OZ/blob/master/doc/Naming-Service.md
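For the broker variant, the canonical ZeroMQ pattern is an XSUB/XPUB proxy in the middle. A minimal sketch (the port numbers are assumptions), with every publisher connecting to one side and every subscriber to the other:

import zmq

def run_proxy():
    context = zmq.Context()
    frontend = context.socket(zmq.XSUB)   # all publishers .connect() to tcp://localhost:5556
    frontend.bind("tcp://*:5556")
    backend = context.socket(zmq.XPUB)    # all subscribers .connect() to tcp://localhost:5557
    backend.bind("tcp://*:5557")
    zmq.proxy(frontend, backend)          # blocks; forwards messages and subscriptions both ways

With the proxy bound on stable addresses, both failing architectures above collapse into connect-only publishers and subscribers, which avoids the dead end of a PUB connecting to another PUB's endpoint.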

Visual Studio 2013 SerialPort not receiving all data

I am currently working on a project that requires serial communication between a PIC 24FV16KA302 and PC software.
I have searched the Internet for the past 3 days and I can't find an answer for my problem, so I decided to ask here. This is my first Visual Studio program, so I don't have any experience with the software.
The PIC has a few variables and two 8x16 tables that I need to view and modify on the PC side. The problem comes when I send the tables; all other information is received without a problem. I am using a serial connection (38400/8-N-1) via an FT232 UART-to-USB converter.
The PC sends "AT+RTPM" to the PIC:
Private Sub Button7_Click(sender As Object, e As EventArgs) Handles Button7.Click
    SerialPort1.ReceivedBytesThreshold = 128
    MachineState = MS.Receive_table
    SerialPort1.Write("AT+RTPM")
End Sub
The PIC sends back 128 bytes (the values in the table):
case read_table_pwm : // send pwm table
    for (yy = 0 ; yy < 8 ; yy++) {
        for (xx = 0 ; xx < 16 ; xx++) {
            uart_send_char(controll_by_pmw_map_lb[yy][xx]);
        }
    }
    at_command = receive_state_idle;
    break;
The software is supposed to receive these and display them in a DataGrid:
Private Sub SerialPort1_DataReceived(sender As Object, e As IO.Ports.SerialDataReceivedEventArgs) Handles SerialPort1.DataReceived
    If MachineState = MS.Receive_table Then
        SerialPort1.Read(Buffer_array_received_data, 0, 128)
        cellpos = 0
        For grid_y As Int16 = 0 To 7 Step 1
            For grid_x As Int16 = 0 To 15 Step 1
                DataGridView1.Rows(grid_y).Cells(grid_x).Value = Buffer_array_received_data(cellpos)
                cellpos += 1
            Next
        Next
    End If
End Sub
The problem is that most of the time (99%) it displays only part of the dataset with zeros to the end, and when I try again it displays the other part, starting from the beginning.
(Screenshots: first request, second request.)
If I try the same thing with another program (Realterm, Termite) I always get the full dataset.
I have tried doing it cell by cell, but that only works if I request them once every second; otherwise I get the same problem.
After that I need to use a timer (100 ms) to request live data from the PIC.
This works better, but I still sometimes get random data. I haven't focused on that for the moment, because without the dataset everything else is useless.
Am I missing something, or has anyone encountered the same problem?
I managed to solve the problem by replacing
SerialPort1.Read(Buffer_array_received_data, 0, 128)
with
For byte_pos As Int16 = 0 To 127 Step 1
    Buffer_array_received_data(byte_pos) = SerialPort1.ReadByte()
Next
But aren't they supposed to be the same?
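They are not: SerialPort.Read only blocks until some data is available, reads up to the requested count, and returns how many bytes it actually read, so a single call can legitimately return fewer than 128 bytes. A sketch of the usual pattern, looping on the return value instead of reading byte by byte:

' Loop until the full 128-byte table has arrived; Read reports how many bytes it took.
Dim total As Integer = 0
While total < 128
    total += SerialPort1.Read(Buffer_array_received_data, total, 128 - total)
End While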

Pyaudio : how to check volume

I'm currently developing a VOIP tool in Python working as a client-server. My problem is that I'm currently sending the PyAudio input stream as follows, even when there is no sound (i.e., when nobody talks or there is no noise, data is sent as well):
CHUNK = 1024
p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16,
                channels=1,
                rate=44100,
                input=True,
                frames_per_buffer=CHUNK)
while 1:
    self.conn.sendVoice(stream.read(CHUNK))
I would like to check volume to get something like this :
data = stream.read(CHUNK)
if data.volume > 20%:
    self.conn.sendVoice(data)
This way I could avoid sending useless data, sparing the connection and increasing performance. (Also, I'm looking for some kind of compression, but I think I will have to ask about that in another topic.)
It can be done using the root mean square (RMS).
One way to build your own RMS function in Python is:
import math
import struct

def rms(data):
    # interpret the byte buffer as 16-bit signed samples
    count = len(data) // 2
    shorts = struct.unpack("%dh" % count, data)
    sum_squares = 0.0
    for sample in shorts:
        n = sample * (1.0 / 32768)   # normalise to [-1, 1]
        sum_squares += n * n
    return math.sqrt(sum_squares / count)
Another choice is to use audioop to find the RMS:
import audioop
data = stream.read(CHUNK)
rms = audioop.rms(data, 2)   # width=2 for paInt16 samples
Now, if you want, you can convert the RMS to the decibel scale: decibel = 20 * log10(rms)
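Putting it together with the original send loop, a minimal sketch (interpreting the "20%" threshold as 20% of int16 full scale, which is an assumption):

import audioop

THRESHOLD = 0.2 * 32768              # "volume > 20%" as a fraction of int16 full scale

while True:
    data = stream.read(CHUNK)
    if audioop.rms(data, 2) > THRESHOLD:   # only send chunks with enough energy
        self.conn.sendVoice(data)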
