TensorFlow dequeue is very slow on Cloud ML

I am trying to run a CNN on the cloud (Google Cloud ML) because my laptop does not have a GPU card.
So I uploaded my data to Google Cloud Storage: a .csv file with 1500 entries, like so:
| label   | img_path   |
|---------|------------|
| label_1 | /img_1.jpg |
| label_2 | /img_2.jpg |
and the corresponding 1500 jpgs.
My input_fn looks like so:
def input_fn(filename,
             batch_size,
             num_epochs=None,
             skip_header_lines=1,
             shuffle=False):
    filename_queue = tf.train.string_input_producer(filename, num_epochs=num_epochs)
    reader = tf.TextLineReader(skip_header_lines=skip_header_lines)
    _, row = reader.read(filename_queue)
    row = parse_csv(row)
    pt = row.pop(-1)
    pth = filename.rpartition('/')[0] + pt
    img = tf.image.decode_jpeg(tf.read_file(tf.squeeze(pth)), 1)
    img = tf.to_float(img) / 255.
    img = tf.reshape(img, [IMG_SIZE, IMG_SIZE, 1])
    row = tf.concat(row, 0)
    if shuffle:
        return tf.train.shuffle_batch(
            [img, row],
            batch_size,
            capacity=2000,
            min_after_dequeue=2 * batch_size + 1,
            num_threads=multiprocessing.cpu_count(),
        )
    else:
        return tf.train.batch([img, row],
                              batch_size,
                              allow_smaller_final_batch=True,
                              num_threads=multiprocessing.cpu_count())
Here is what the full graph looks like (very simple CNN indeed):
When I run the training with a batch size of 200 on my laptop (where the data is stored locally), most of the compute time is spent on the gradients node, which is what I would expect. The batch node has a compute time of ~12 ms.
When I run it on the cloud (scale tier BASIC), the batch node takes more than 20 s, and the bottleneck seems to come from the QueueDequeueUpToV2 subnode, according to TensorBoard:
Does anyone have a clue why this happens? I am pretty sure I am getting something wrong here, so I'd be happy to learn.
A few remarks:
- Switching between batch/shuffle_batch with different min_after_dequeue values has no effect.
- When using BASIC_GPU, the batch node is also on the CPU, which is normal from what I read, and it takes roughly 13 s.
- Adding a time.sleep after the queues are started, to rule out queue starvation, also has no effect.
- Compute time is indeed linear in batch_size, so with a batch_size of 50 the compute time is about 4 times smaller than with a batch_size of 200.
Thanks for reading and would be happy to give more details if anyone needs.
Best,
Al
Update:
- The Cloud ML instance and the bucket were not in the same region; putting them in the same region improved things 4x.
- Creating a .tfrecords file made the batching take about 70 ms, which seems acceptable. I used this blog post as a starting point to learn about it; I recommend it.
I hope this will help others to create a fast data input pipeline!

Try converting your images to TFRecord format and reading them directly from the graph. The way you are doing it now, there is no possibility of caching, and if your images are small, you are not taking advantage of the high sustained reads Cloud Storage offers. Saving all your jpg images into one TFRecord file, or a small number of files, will help.
Also, make sure your bucket is a single-region bucket in a region that has GPUs, and that you are submitting your Cloud ML job in that region.
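A minimal conversion sketch along those lines (hedged: `make_example` and `convert_to_tfrecord` are hypothetical helper names, and the feature keys are up to you; `tf.io.TFRecordWriter` and `tf.train.Example` are the standard TensorFlow APIs for this):

```python
import tensorflow as tf

def make_example(jpeg_bytes, label):
    # Wrap one (raw jpeg bytes, integer label) pair in a tf.train.Example.
    return tf.train.Example(features=tf.train.Features(feature={
        'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
        'image': tf.train.Feature(bytes_list=tf.train.BytesList(value=[jpeg_bytes])),
    }))

def convert_to_tfrecord(pairs, out_path):
    # pairs: iterable of (jpeg_path, integer_label), e.g. parsed from the CSV.
    with tf.io.TFRecordWriter(out_path) as writer:
        for jpeg_path, label in pairs:
            with open(jpeg_path, 'rb') as f:
                writer.write(make_example(f.read(), label).SerializeToString())
```

Reading it back is then a single sequential scan of one large object, which plays much better with Cloud Storage than 1500 individual reads.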

I had a similar problem before. I solved it by changing tf.train.batch() to tf.train.batch_join(). In my experiment, with a batch size of 64 and 4 GPUs, it took 22 minutes using tf.train.batch(), whereas it took only 2 minutes using tf.train.batch_join().
From the TensorFlow docs:
If you need more parallelism or shuffling of examples between files, use multiple reader instances using the tf.train.shuffle_batch_join
https://www.tensorflow.org/programmers_guide/reading_data
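A minimal sketch of that change, in terms of the question's pipeline (this assumes a `read_example(filename_queue)` helper that builds one reader and returns a single `[img, row]` pair, like the body of the question's `input_fn`; parameter values are illustrative):

```python
import tensorflow as tf

def batch_join_input(filename_queue, read_example, batch_size=64, n_readers=4):
    # One [img, row] pair per reader; each call creates its own reader op,
    # so dequeues run in parallel instead of serializing on a single reader.
    example_list = [read_example(filename_queue) for _ in range(n_readers)]
    return tf.train.shuffle_batch_join(
        example_list,
        batch_size=batch_size,
        capacity=2000,
        min_after_dequeue=2 * batch_size + 1)
```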

Related

What is a good way of using multiprocessing for bifacial_radiance simulations?

For a university project I am using bifacial_radiance v0.4.0 to run simulations of approx. 270,000 rows of data in an EPW file.
I have set up a scene with some panels in a module following a tutorial on the bifacial_radiance GitHub page.
I am running the Python script for this on a high-powered computer with 64 cores. Since Python natively uses only one processor, I want to use multiprocessing, which is currently working. However, it does not seem very fast: even when starting 64 processes, it uses roughly 10% of the CPU's capacity (according to the task manager).
The script will first create the scene with panels.
Then it will look at a result file (where I store results as CSV) and compare it to the contents of the radObj.metdata object. Both the metdata object and my result file use dates, so all dates which exist in the metdata object but not in the result file are stored in a queue object from the multiprocessing package. I also initialize a result queue.
I want to send a lot of the work to other processors.
To do this I have written two functions:
A file writer function which every 10 seconds gets all items from the result queue and writes them to the result file. This function runs in a single multiprocessing.Process, like so:
fileWriteProcess = Process(target=fileWriter, args=(resultQueue, resultFileName))
fileWriteProcess.start()
A ray trace function with a unique ID which does the following:
Get an index idx from the index queue (described above)
Use this index in radObj.gendaylit(idx)
Create the octfile. For this I have modified the name which the octfile is saved with to use a prefix which is the name of the process. This is to avoid all the processes using the same octfile on the SSD. octfile = radObj.makeOct(prefix=name)
Run an analysis analysis = bifacial_radiance.AnalysisObj(octfile,radObj.basename)
frontscan, backscan = analysis.moduleAnalysis(scene)
frontDict, backDict = analysis.analysis(octfile, radObj.basename, frontscan, backscan)
Read the desired results from resultDict and put them in the resultQueue as a single line of comma-separated values.
This all works. The processes are running after being created in a for loop.
This speeds up the whole simulation quite a bit (10 days down to 1.5 days), but as said earlier the CPU runs at around 10% capacity and the GPU at around 25%. The computer has 512 GB of RAM, which is not an issue. The only communication with the processes is through the resultQueue and indexQueue, which should not bottleneck the program. I can see that the processes are not synchronized, as the results are written slightly out of order while the input EPW file is sorted.
My question is whether there is a better way to do this, one which might make it run faster. I can see in the source code that a boolean "hpc" is used when initializing some of the classes, and a comment in the code mentions that it is for multiprocessing, but I can't find any information about it elsewhere.
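For anyone comparing approaches, the index-queue/result-queue worker pattern described in the question can be reduced to a stand-alone sketch (here `do_ray_trace` is a stand-in for the gendaylit/makeOct/analysis steps):

```python
import multiprocessing
import queue

def do_ray_trace(idx):
    # Stand-in for gendaylit()/makeOct()/analysis(); returns one CSV line.
    return "%d,result" % idx

def worker(index_queue, result_queue):
    # Drain indices until the queue is empty, pushing result lines back.
    while True:
        try:
            idx = index_queue.get_nowait()
        except queue.Empty:
            break
        result_queue.put(do_ray_trace(idx))

if __name__ == "__main__":
    index_q = multiprocessing.Queue()
    result_q = multiprocessing.Queue()
    for i in range(8):
        index_q.put(i)
    procs = [multiprocessing.Process(target=worker, args=(index_q, result_q))
             for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    # result_q now holds one line per index, in completion (not input) order.
```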

R307 Fingerprint Sensor working with more than 1000 fingerprints

I want to integrate a fingerprint sensor into my project. For this I have shortlisted the R307, which has a capacity of 1000 fingerprints. But as the project requires more than 1000 prints, I am going to store the prints on the host.
The procedure I understand from reading the datasheet to achieve the project requirements is:
I will register the fingerprint with "GenImg".
I will download the template with "UpChar".
Now whenever a fingerprint comes in, I will follow step 1 and step 2.
Then start some sort of matching algorithm that will match the recently downloaded template file with the template files stored in the database.
So below are the points on which I want your thoughts:
Is the procedure I have written above correct and optimized?
Is the matching algorithm straightforward, just a direct comparison, or is it somehow tricky? How can I implement it? Please suggest a library if one already exists.
The sensor stores the image as 256 × 288 pixels, and if I transfer this file to the host at the maximum data rate it takes ~5 s (256 × 288 × 8 / 115200), which seems very long.
Thanks
Abhishek
PS: I just mentioned a "host" to which I am going to connect the sensor; it can be an Arduino/Pi or any other computing device, selected depending on how much computing this task requires.
You most probably figured it out yourself, but for anyone stumbling here in the future:
You're correct for the most part.
You will take the finger image (GenImg).
You will then generate a character file (Img2Tz) in BufferID 1.
You'll repeat the above 2 steps, but this time store the character file in BufferID 2.
You're now supposed to generate a template file by combining those 2 character files (RegModel).
The device combines them for you and stores the template in both character buffers.
As a last step, you need to store this template in your storage (Store).
For searching for a finger: you'll take the finger image once, generate a character file in BufferID 1, and search the library (Search). This performs a linear search and returns the finger ID along with a confidence score.
There's also another method (GR_Identify) that does all of the above automatically.
The question about optimization isn't really applicable here: you're using a third-party device, and you have to follow its working instructions whether they're optimized or not.
The sensor stores the image in 256 * 288 pixels and if I take this file to host at maximum data rate it takes ~5(256 * 288*8/115200) seconds. which seems very large.
I don't really get what you mean by this, but the template file (which you intend to upload to your host) is 512 bytes, so I don't think it should take much time.
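The transfer-time arithmetic is easy to sanity-check (a rough lower bound that ignores packet/protocol overhead):

```python
BAUD = 115200  # UART speed in bits per second

def transfer_seconds(num_bytes, bits_per_byte=8):
    # Best-case serial transfer time, ignoring protocol framing overhead.
    return num_bytes * bits_per_byte / BAUD

raw_image_s = transfer_seconds(256 * 288)  # full raw image: ~5.1 s
template_s = transfer_seconds(512)         # 512-byte template: ~0.04 s
```

This is why shipping the 512-byte template rather than the raw image keeps the round-trip fast.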
If you want an overview of how this system is implemented; Adafruit's Library is a good reference.

Solution to the small-files bottleneck in HDFS

I have hundreds of thousands of small CSV files in HDFS. Before merging them into a single dataframe, I need to add an id to each file individually (otherwise, after the merge it won't be possible to distinguish between data from different files).
Currently I rely on YARN to distribute the processes I create, which add the id to each file and convert it to Parquet format. I find that no matter how I tune the cluster (in size/executors/memory), the throughput is limited to 2000-3000 files/h.
def addIdCsv(x):
    logId = x['logId']
    filePath = x['filePath']
    fLogRaw = spark.read.option('header', 'true').option('inferSchema', 'true').csv(filePath)
    fLogRaw = fLogRaw.withColumn('id', F.lit(logId))
    fLogRaw.write.mode('overwrite').parquet(filePath + '_added')

for i in range(0, numBatches):
    fileSlice = fileList[i * batchSize:(i + 1) * batchSize]
    p = ThreadPool(numNodes)
    logger.info('\n\n\n --------------- \n\n\n')
    logger.info('Starting Batch : ' + str(i))
    logger.info('\n\n\n --------------- \n\n\n')
    p.map(addIdCsv, fileSlice)
You can see that my cluster is underperforming on CPU, even though the YARN manager shows it has 100% access to resources.
What is the best way to solve this part of the data pipeline? Where is the bottleneck?
Update 1
The jobs are evenly distributed as you can see in the event timeline visualization below.
As per @cricket_007's suggestion, NiFi provides a good, easy solution to this problem that is more scalable and integrates better with other frameworks than plain Python. The idea is to read the files into NiFi before writing to HDFS (in my case they are in S3). There is still an inherent bottleneck in reading/writing to S3, but the throughput is around 45k files/h.
The flow looks like this.
Most of the work is done in the ReplaceText processor, which finds the end-of-line character '|' and appends the uuid and a newline.

Profiling a multi-tiered, distributed, web application (server side)

I would like to profile a complex web application from the server PoV.
According to the wikipedia link above, and the Stack Overflow profiling tag description, profiling (in one of its forms) means getting a list (or a graphical representation) of APIs/components of the application, each with the number of calls and time spent in it during run-time.
Note that unlike a traditional one-program/one-language application, a web server application may be:
Distributed over multiple machines
Different components may be written in different languages
Different components may be running on top of different OSes, etc.
So the traditional "Just use a profiler" answer is not easily applicable to this problem.
I'm not looking for:
Coarse performance stats like the ones provided by various log-analysis tools (e.g. Analog), nor for
client-side, per-page performance stats like the ones presented by tools such as Google's PageSpeed or Yahoo! YSlow (waterfall diagrams, browser component load times)
Instead, I'm looking for a classic profiler-style report:
number of calls
call durations
by function/API/component-name, on the server-side of the web application.
Bottom line, the question is:
How can one profile a multi-tiered, multi-platform, distributed web application?
A free-software based solution is much preferred.
I have been searching the web for a solution for a while and couldn't find anything satisfactory to fit my needs except some pretty expensive commercial offerings. In the end, I bit the bullet, thought about the problem, and wrote my own solution which I wanted to freely share.
I'm posting my own solution since this practice is encouraged on SO
This solution is far from perfect; for example, it works at a very high level (individual URLs), which may not be good for all use-cases. Nevertheless, it has helped me immensely in trying to understand where my web-app spends its time.
In the spirit of open source and knowledge sharing, I welcome any other, especially superior, approaches and solutions from others.
Thinking of how traditional profilers work, it should be straightforward to come up with a general free-software solution to this challenge.
Let's break the problem into two parts:
Collecting the data
Presenting the data
Collecting the data
Assume we can break our web application into its individual
constituent parts (API, functions) and measure the time it takes
each of these parts to complete. Each part is called thousands of
times a day, so we could collect this data over a full day or so on
multiple hosts. When the day is over we would have a pretty big and
relevant data-set.
Epiphany #1: substitute 'function' with 'URL', and our existing
web-logs are "it". The data we need is already there:
Each part of a web API is defined by the request URL (possibly
with some parameters)
The round-trip times (often in microseconds) appear on each line
We have a day, (week, month) worth of lines with this data handy
So if we have access to standard web-logs for all the distributed
parts of our web application, part one of our problem (collecting
the data) is solved.
Presenting the data
Now we have a big data-set, but still no real insight.
How can we gain insight?
Epiphany #2: visualize our (multiple) web-server logs directly.
A picture is worth a thousand words. Which picture can we use?
We need to condense hundreds of thousands or millions of lines of multiple
web-server logs into a short summary which would tell most of the
story about our performance. In other words: the goal is to generate
a profiler-like report, or even better: a graphical profiler report,
directly from our web logs.
Imagine we could map:
Call-latencies to one dimension
Number of calls to another dimension, and
The function identities to a color (essentially a 3rd dimension)
One such picture, a stacked-density chart of latencies by API,
appears below (function names are made up for illustrative purposes).
The Chart:
Some observations from this example
We have a tri-modal distribution representing 3 radically
different 'worlds' in our application:
The fastest responses, are centered around ~300 microseconds
of latency. These responses are coming from our varnish cache
The second fastest, taking a bit less than 0.01 seconds on
average, are coming from various APIs served by our middle-layer
web application (Apache/Tomcat)
The slowest responses, centered around 0.1 seconds and
sometimes taking several seconds to respond to, involve round-trips
to our SQL database.
We can see how dramatic caching effects can be on an application
(note that the x-axis is on a log10 scale)
We can specifically see which APIs tend to be fast vs slow, so
we know what to focus on.
We can see which APIs are most often called each day.
We can also see that some of them are so rarely called, it is hard to even see their color on the chart.
How to do it?
The first step is to pre-process the logs and extract the needed
subset of data. A trivial utility like Unix 'cut' on multiple logs
may be sufficient here. You may also need to collapse multiple
similar URLs into shorter strings describing the function/API like
'registration', or 'purchase'. If you have a multi-host unified log
view generated by a load-balancer, this task may be easier. We
extract only the names of the APIs (URLs) and their latencies, so we
end up with one big file with a pair of columns, separated by TABs
*API_Name Latency_in_microSecs*
func_01 32734
func_01 32851
func_06 598452
...
func_11 232734
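The pre-processing step above can be sketched in a few lines of Python. The field positions and the URL-collapsing patterns below are hypothetical and must be adapted to your own log format; this assumes a combined-log-style line with the request path as the seventh whitespace-separated field and the latency in microseconds appended as the last field:

```python
import re
import sys

# Collapse families of similar URLs into short API names (illustrative only).
API_PATTERNS = [
    (re.compile(r'^/register'), 'registration'),
    (re.compile(r'^/buy'), 'purchase'),
]

def api_name(url):
    for pattern, name in API_PATTERNS:
        if pattern.search(url):
            return name
    return url.split('?')[0]  # fall back to the bare path

def extract(line):
    # Return "API<TAB>latency_in_microsecs" for one access-log line.
    fields = line.split()
    url, latency = fields[6], fields[-1]
    return '%s\t%s' % (api_name(url), latency)

if __name__ == '__main__':
    for line in sys.stdin:
        print(extract(line))
```

Run it over the concatenated logs and redirect stdout to the TSV file.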
Now we run the R script below on the resulting data pairs to produce
the wanted chart (using Hadley Wickham's wonderful ggplot2 library).
Voilà!
The code to generate the chart
Finally, here's the code to produce the chart from the API+Latency TSV data file:
#!/usr/bin/Rscript --vanilla
#
# Generate stacked chart of API latencies by API from a TSV data-set
#
# ariel faigon - Dec 2012
#
.libPaths(c('~/local/lib/R',
            '/usr/lib/R/library',
            '/usr/lib/R/site-library'))

suppressPackageStartupMessages(library(ggplot2))
# grid lib needed for 'unit()':
suppressPackageStartupMessages(library(grid))

#
# Constants: width, height, resolution, font-colors and styles
# Adapt to taste
#
wh.ratio = 2
WIDTH = 8
HEIGHT = WIDTH / wh.ratio
DPI = 200
FONTSIZE = 11
MyGray = gray(0.5)

title.theme   = element_text(family="FreeSans", face="bold.italic",
                             size=FONTSIZE)
x.label.theme = element_text(family="FreeSans", face="bold.italic",
                             size=FONTSIZE-1, vjust=-0.1)
y.label.theme = element_text(family="FreeSans", face="bold.italic",
                             size=FONTSIZE-1, angle=90, vjust=0.2)
x.axis.theme  = element_text(family="FreeSans", face="bold",
                             size=FONTSIZE-1, colour=MyGray)
y.axis.theme  = element_text(family="FreeSans", face="bold",
                             size=FONTSIZE-1, colour=MyGray)

#
# Function generating well-spaced & well-labeled y-axis (count) breaks
#
yscale_breaks <- function(from.to) {
    from <- 0
    to <- from.to[2]
    # round to 10 ceiling
    to <- ceiling(to / 10.0) * 10
    # Count major breaks on 10^N boundaries, include the 0
    n.maj = 1 + ceiling(log(to) / log(10))
    # if major breaks are too few, add minor-breaks half-way between them
    n.breaks <- ifelse(n.maj < 5, max(5, n.maj*2+1), n.maj)
    breaks <- as.integer(seq(from, to, length.out=n.breaks))
    breaks
}

#
# -- main
#
# -- process the command line args: [tsv_file [png_file]]
#    (use defaults if they aren't provided)
#
argv <- commandArgs(trailingOnly = TRUE)
if (is.null(argv) || (length(argv) < 1)) {
    argv <- c(Sys.glob('*api-lat.tsv')[1])
}
tsvfile <- argv[1]
stopifnot(! is.na(tsvfile))
pngfile <- ifelse(is.na(argv[2]), paste(tsvfile, '.png', sep=''), argv[2])

# -- Read the data from the TSV file into an internal data.frame d
d <- read.csv(tsvfile, sep='\t', head=F)

# -- Give each data column a human readable name
names(d) <- c('API', 'Latency')

#
# -- Convert microseconds Latency (our weblog resolution) to seconds
#
d <- transform(d, Latency=Latency/1e6)

#
# -- Trim the latency axis:
#    Drop the few 0.001% extreme-slowest outliers on the right
#    to prevent them from pushing the bulk of the data to the left
#
Max.Lat <- quantile(d$Latency, probs=0.99999)
d <- subset(d, Latency < Max.Lat)

#
# -- API factor pruning
#    Drop rows where the API is less than a small % of total calls
#
Rare.APIs.pct <- 0.001
if (Rare.APIs.pct > 0.0) {
    d.N <- nrow(d)
    API.counts <- table(d$API)
    d <- transform(d, CallPct=100.0*API.counts[d$API]/d.N)
    d <- d[d$CallPct > Rare.APIs.pct, ]
    d.N.new <- nrow(d)
}

#
# -- Adjust legend item-height & font-size
#    to the number of distinct APIs we have
#
API.count <- nlevels(as.factor(d$API))
Legend.LineSize <- ifelse(API.count < 20, 1.0, 20.0/API.count)
Legend.FontSize <- max(6, as.integer(Legend.LineSize * (FONTSIZE - 1)))
legend.theme = element_text(family="FreeSans", face="bold.italic",
                            size=Legend.FontSize,
                            colour=gray(0.3))

# -- set latency (X-axis) breaks and labels (s.b made more generic)
lat.breaks <- c(0.00001, 0.0001, 0.001, 0.01, 0.1, 1, 10)
lat.labels <- sprintf("%g", lat.breaks)

#
# -- Generate the chart using ggplot
#
p <- ggplot(data=d, aes(x=Latency, y=..count../1000.0, group=API, fill=API)) +
        geom_bar(binwidth=0.01) +
        scale_x_log10(breaks=lat.breaks, labels=lat.labels) +
        scale_y_continuous(breaks=yscale_breaks) +
        ggtitle('APIs Calls & Latency Distribution') +
        xlab('Latency in seconds - log(10) scale') +
        ylab('Call count (in 1000s)') +
        theme(
            plot.title=title.theme,
            axis.title.y=y.label.theme,
            axis.title.x=x.label.theme,
            axis.text.x=x.axis.theme,
            axis.text.y=y.axis.theme,
            legend.text=legend.theme,
            legend.key.height=unit(Legend.LineSize, "line")
        )

#
# -- Save the plot into the png file
#
ggsave(p, file=pngfile, width=WIDTH, height=HEIGHT, dpi=DPI)
Your discussion of "back in the day" profiling practice is true.
There's just one little problem it always had:
In non-toy software, it may find something, but it doesn't find much, for a bunch of reasons.
The thing about opportunities for higher performance is, if you don't find them, the software doesn't break, so you can just pretend they don't exist.
That is, until a different method is tried, and they are found.
In statistics, this is called a type 2 error - a false negative.
An opportunity is there, but you didn't find it.
What it means is if somebody does know how to find it, they're going to win, big time.
Here's probably more than you ever wanted to know about that.
So if you're looking at the same kind of stuff in a web app - invocation counts, time measurements - you're not likely to do better than the same kind of non-results.
I'm not into web apps, but I did a fair amount of performance tuning in a protocol-based factory automation app many years ago.
I used a logging technique.
I won't say it was easy, but it did work.
The people I see doing something similar are here, where they use what they call a waterfall chart.
The basic idea is rather than casting a wide net and getting a lot of measurements, you trace through a single logical thread of transactions, analyzing where delays are occurring that don't have to.
So if results are what you're after, I'd look down that line of thinking.

Why is downloading from Azure blobs taking so long?

In my Azure web role's OnStart() I need to deploy a huge unmanaged program the role depends on. The program is previously compressed into a 400-megabyte .zip archive, split into files of 20 megabytes each, and uploaded to a blob storage container. That program doesn't change - once uploaded, it can stay that way for ages.
My code does the following:
CloudBlobContainer container = ...;
String localPath = ...;
using( FileStream writeStream = new FileStream(
    localPath, FileMode.OpenOrCreate, FileAccess.Write ) )
{
    for( int i = 0; i < blobNames.Size(); i++ ) {
        String blobName = blobNames[i];
        container.GetBlobReference( blobName ).DownloadToStream( writeStream );
    }
    writeStream.Close();
}
It just opens a file, then writes the parts into it one by one. It works great, except that it takes about 4 minutes when run from a single-core (extra small) instance. That means an average download speed of about 1.7 megabytes per second.
This worries me - it seems too slow. Should it be so slow? What am I doing wrong? What could I do instead to solve my problem with deployment?
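For reference, the 1.7 MB/s figure is just the archive size divided by the elapsed time; as a quick check:

```python
SIZE_MB = 400        # total size of the split .zip archive
ELAPSED_S = 4 * 60   # observed download time on the extra small instance

rate_mb_per_s = SIZE_MB / ELAPSED_S  # ~1.7 MB/s
rate_mbit_per_s = rate_mb_per_s * 8  # ~13 Mbit/s on the wire
```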
Adding to what Richard Astbury said: an Extra Small instance has only a small fraction of the bandwidth that even a Small gives you. You'll see approx. 5 Mbps on an Extra Small and approx. 100 Mbps on a Small (for Small through Extra Large, you'll get approx. 100 Mbps per core).
The extra small instance has limited IO performance. Have you tried going for a medium sized instance for comparison?
In some ad-hoc testing I did in the past, I found that there is no discernible difference between downloading 1 large file and downloading N smaller files in parallel. It turns out that the bandwidth of the NIC is usually the limiting factor no matter what, and a large file will saturate it just as easily as many smaller ones. The reverse is not true, by the way: you do benefit from uploading in parallel as opposed to one at a time.
The reason I mention this is that it seems like you should be using 1 large zip file here and something like Bootstrapper. That would be 1 line of code for you to download, unzip, and possibly run. Even better, it won't do it more than once on reboot unless you force it to.
As others have already aptly mentioned, the NIC bandwidth on the XS instances is vastly smaller than even on a Small instance. You will see much faster downloads by bumping up the VM size slightly.
