Reducing Latency for GPT-J - huggingface-transformers

I'm using GPT-J locally on an Nvidia RTX 3090 GPU. Currently, I'm using the model in the following way:
config = transformers.GPTJConfig.from_pretrained("EleutherAI/gpt-j-6B")
tokenizer = transformers.AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B", pad_token='<|endoftext|>', eos_token='<|endoftext|>', truncation_side='left')
model = GPTJForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    revision="float16",
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    use_cache=True,
    gradient_checkpointing=True,
)
model.to('cuda')

prompt = tokenizer(text, return_tensors='pt', truncation=True, max_length=2048)
prompt = {key: value.to('cuda') for key, value in prompt.items()}
out = model.generate(
    **prompt,
    n=1,
    min_length=16,
    max_new_tokens=75,
    do_sample=True,
    top_k=35,
    top_p=0.9,
    batch_size=512,
    temperature=0.75,
    no_repeat_ngram_size=4,
    clean_up_tokenization_spaces=True,
    use_cache=True,
    pad_token_id=tokenizer.eos_token_id,
)
res = tokenizer.decode(out[0])
As input to the model I'm using 2048 tokens, and I produce 75 tokens as output. The latency is around 4-5 seconds. In the following blog post, I've read that latency can be improved by using pipelines and that tokenization can be a bottleneck.
Can the tokenization be improved for my code and would using a pipeline reduce the latency? Are there any other things I can do to reduce the latency?
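For what it's worth, most of the 4-5 seconds is likely the 75 sequential decode steps of a 6B-parameter fp16 model rather than tokenization, which is usually a few milliseconds for 2048 tokens. Below is a minimal sketch of the pipeline approach the blog post refers to, reusing the model, tokenizer, and text names from the snippet above; treat it as something to benchmark against the current code, not as a guaranteed speed-up.
import torch
import transformers

# Sketch: wrap the already-loaded model/tokenizer in a text-generation pipeline.
pipe = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    device=0,  # the RTX 3090
)

with torch.inference_mode():  # skip autograd bookkeeping during generation
    out = pipe(
        text,
        max_new_tokens=75,
        do_sample=True,
        top_k=35,
        top_p=0.9,
        temperature=0.75,
        no_repeat_ngram_size=4,
        pad_token_id=tokenizer.eos_token_id,
    )
res = out[0]["generated_text"]
If the pipeline and the manual generate call take essentially the same time, the bottleneck is the model itself, and the remaining levers are shorter prompts, fewer new tokens, or a quantized/compiled runtime.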

Related

How to reduce tensorflow dataset input pipeline host device (cpu) time (currently ~40%)?

I am trying to replicate the resnet18 paper. Before running this on the full ImageNet dataset on disk, I'm doing some evaluation runs with the publicly available imagenette/320px dataset from TFDS (a much smaller subset of ImageNet with 10 classes, already in .tfrecord format).
Note: the full notebook to do training and tracing is available here: resnet18_baseline.ipynb. Just switch to a GPU runtime and run all the cells. It's already set up with TensorBoard profiling on the second batch. (You can use a TPU as well, but some keras.layers.experimental.preprocessing layers do not support TPU ops yet and you have to enable soft device placement. Please use a GPU.)
Input Operations
Read images from the input dataset. These images usually have different dimensions, and we need some crop function because input tensors cannot have different dimensions for batching. Therefore, I use a random crop for training and a center crop for the testing/validation datasets.
random_crop_layer = keras.layers.experimental.preprocessing.RandomCrop(224, 224)
center_crop_layer = keras.layers.experimental.preprocessing.CenterCrop(224, 224)

@tf.function(experimental_relax_shapes=True)  # avoid retracing
def train_crop_fn(x, y):
    return random_crop_layer(x), y

@tf.function(experimental_relax_shapes=True)
def eval_crop_fn(x, y):
    return center_crop_layer(x), y
Perform some simple preprocessing/augmentation on the input data. This includes rescaling to 0-1 and normalizing with the mean and standard deviation of the RGB colours on ImageNet. Also, random
rescaling_layer = keras.layers.experimental.preprocessing.Rescaling(1./255)

train_preproc = keras.Sequential([
    rescaling_layer
])
# eval preprocessing is assumed to mirror the training preprocessing (rescaling only)
eval_preproc = keras.Sequential([
    rescaling_layer
])

# from https://github.com/tensorflow/models/blob/master/official/vision/image_classification/preprocessing.py
# Calculated from the ImageNet training set
MEAN_RGB = (0.485, 0.456, 0.406)
STDDEV_RGB = (0.229, 0.224, 0.225)

@tf.function
def z_score_scale(x):
    return (x - MEAN_RGB) / STDDEV_RGB

@tf.function
def train_preproc_fn(x, y):
    return z_score_scale(train_preproc(x)), y

@tf.function
def eval_preproc_fn(x, y):
    return z_score_scale(eval_preproc(x)), y
Input Pipeline
def get_input_pipeline(input_ds, bs, crop_fn, augmentation_fn):
    ret_ds = (
        input_ds
        .batch(1)  # pre-crop images have different dimensions and can't be batched
        .map(crop_fn,
             num_parallel_calls=tf.data.experimental.AUTOTUNE)
        .unbatch()
        .batch(bs)
        .map(augmentation_fn,  # augmentations can be batched, though
             num_parallel_calls=tf.data.experimental.AUTOTUNE)
    )
    return ret_ds
# dataset loading
def load_imagenette():
    train_ds, ds_info = tfds.load('imagenette/320px', split='train', as_supervised=True, with_info=True)
    valid_ds = tfds.load('imagenette/320px', split='validation', as_supervised=True)
    return train_ds, valid_ds, valid_ds, ds_info.features['label'].num_classes

# pipeline construction
train_ds, valid_ds, test_ds, num_classes = load_imagenette()

# datasets used for training (notice that I use prefetch here)
train_samples = get_input_pipeline(train_ds, BS, train_crop_fn, train_preproc_fn).prefetch(tf.data.experimental.AUTOTUNE)
valid_samples = get_input_pipeline(valid_ds, BS, eval_crop_fn, eval_preproc_fn).prefetch(tf.data.experimental.AUTOTUNE)
test_samples = get_input_pipeline(test_ds, BS, eval_crop_fn, eval_preproc_fn).prefetch(tf.data.experimental.AUTOTUNE)
Training and Profiling
I use the TensorBoard profiler to profile the second batch, and I get a warning that training is highly input bound, with about 40% of processing wasted on inputs.
For a classic resnet18 model, you can drive the batch size up to 768 without getting an OOM error, which is what I use. A single step with batch size 256 takes about 2-3 seconds.
I also get a warning that on_train_batch_end is slow, at around ~1.5 seconds, compared to the ~1 s batch time.
The model training code is very simple Keras:
model.fit(
    train_samples,
    validation_data=valid_samples,
    epochs=100,
    batch_size=BS,
    use_multiprocessing=True,
    callbacks=[tensorboard_callback, model_checkpoint_callback, early_stop_callback, reduce_lr_callback]
)
and the callbacks are specified as:
log_dir = os.path.join(os.getcwd(), 'logs')
tensorboard_callback = TensorBoard(log_dir=log_dir, update_freq="epoch", profile_batch=2)
reduce_lr_callback = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=5, min_lr=0.001, verbose=1)
model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(filepath='model.{epoch:02d}-{val_loss:.4f}.h5',
                                                               monitor='val_loss',
                                                               verbose=1,
                                                               save_best_only=True)
early_stop_callback = keras.callbacks.EarlyStopping(monitor='val_loss', patience=15)
Lastly, here are some sample TensorBoard profiling screenshots. I can't figure out how to make this run faster.
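One direction that is often suggested for input-bound tf.data pipelines, sketched against the get_input_pipeline above (get_input_pipeline_v2 is just an illustrative name, and the cache() call assumes the decoded imagenette images fit in host RAM):
def get_input_pipeline_v2(input_ds, bs, crop_fn, augmentation_fn):
    return (
        input_ds
        .cache()  # keep decoded (variable-size) images in memory after the first epoch
        .batch(1)
        .map(crop_fn, num_parallel_calls=tf.data.experimental.AUTOTUNE)
        .unbatch()
        .batch(bs)
        .map(augmentation_fn, num_parallel_calls=tf.data.experimental.AUTOTUNE)
        .prefetch(tf.data.experimental.AUTOTUNE)
    )
Caching before the random crop keeps the augmentation random from epoch to epoch while still skipping TFRecord reading and image decoding after the first pass.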

Keras Inception-v3 fine-tuning workaround

I am trying to fine-tune Inception-v3, but no matter which layer I choose to freeze I get random predictions. I found that other people are having the same problem: https://github.com/keras-team/keras/issues/9214. It seems that the problem comes from setting the BN layers to non-trainable.
Now I am trying to get the output of the last layer I want to freeze and use it as an input to the following layers, which I will then train:
train_generator = train_datagen.flow_from_directory(
    os.path.join(directory, "train_data"),
    target_size=size,
    interpolation="bilinear",
    classes=["a", "b", "c", "d"],
    batch_size=1,
    shuffle=False)

base_model = InceptionV3(weights='imagenet', include_top=True, input_shape=(299, 299, 3))
model_features = Model(inputs=base_model.input,
                       outputs=base_model.get_layer(self.Inception_Fine_Tune_Layers[layer_freeze]).output)

# I want to use this as input
values_train = model_features.predict_generator(train_generator, verbose=1)
However, I get a memory error like this, although I have 12 GB, which is more than what I need:
....
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 1 Chunks of size 3268864 totalling 3.12MiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 1 Chunks of size 3489024 totalling 3.33MiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 1 Chunks of size 4211968 totalling 4.02MiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 1 Chunks of size 5129472 totalling 4.89MiB
I tensorflow/core/common_runtime/bfc_allocator.cc:700] Sum Total of in-use chunks: 3.62GiB
I tensorflow/core/common_runtime/bfc_allocator.cc:702] Stats:
Limit: 68719476736
InUse: 3886957312
MaxInUse: 3889054464
NumAllocs: 3709
MaxAllocSize: 8388608
Any suggestion on how to fix this, or another workaround to fine-tune Inception, would be very helpful.
From what you've provided, I can't tell whether you're preprocessing your input properly. However, Keras provides preprocessing functions that are specific to each pre-trained net, in this case Inception V3.
from keras.applications.inception_v3 import preprocess_input
Try adding it to your data generator as the preprocessing function. Note that preprocessing_function is an argument of ImageDataGenerator itself, not of flow_from_directory:
train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)  # <---
train_generator = train_datagen.flow_from_directory(
    os.path.join(directory, "train_data"),
    target_size=size,
    interpolation="bilinear",
    classes=["a", "b", "c", "d"],
    batch_size=1,
    shuffle=False)
You should then be able to unfreeze all of the layers, or the select few that you want to train.
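For illustration, a minimal sketch of that freeze/unfreeze step (mixed7 is just an example cut point, pick whichever layer you need from base_model.summary(), and the small classification head is assumed for the four classes above):
from keras.applications.inception_v3 import InceptionV3
from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Model

base_model = InceptionV3(weights='imagenet', include_top=False, input_shape=(299, 299, 3))

freeze_until = 'mixed7'  # example cut point
trainable = False
for layer in base_model.layers:
    layer.trainable = trainable
    if layer.name == freeze_until:
        trainable = True  # layers after the cut point will be trained

x = GlobalAveragePooling2D()(base_model.output)
outputs = Dense(4, activation='softmax')(x)  # classes "a", "b", "c", "d"
model = Model(inputs=base_model.input, outputs=outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])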
Hope that helps!

Reading in parallel from a generator in Keras

I have a big dataset divided into files.
I would like to read and process my data one file at a time, and for this I have this Keras generator:
def myGenerator():
    while 1:
        rnd = random.randint(1, 200)
        strRnd = str(rnd)
        lenRnd = len(strRnd)
        rndPadded = strRnd.rjust(5, '0')
        nSearchesInBatch = 100
        f = "path/part-" + rndPadded + "*"  # read one block of data
        data = sqlContext.read.load(f).toPandas()
        imax = int(data.shape[0] / nSearchesInBatch)  # number of batches created sequentially from the generator
        for i in range(imax):
            data_batch = data[i * nSearchesInBatch:(i + 1) * nSearchesInBatch]
            features = data_batch['features']
            output = data_batch['output']
            yield features, output
The problem is that the reading takes the biggest part of the time (each file is around 200 MB), and in the meantime the GPU sits waiting. Is it possible to pre-read the next batch while the GPU is training on the previous one?
At the moment one file is read and split into steps (the inner loop); during those steps the CPU work is hidden and the GPU is training, but as soon as the epoch finishes the GPU goes idle and the CPU starts reading the next file (which takes 20-30 seconds).
Any solution to parallelize this?
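One option, sketched below rather than taken from the question: hand the generator to tf.data, which can read ahead asynchronously via prefetch while the GPU trains on the previous chunk. The dtypes/shapes in output_signature and the steps_per_epoch value are assumptions to adapt to your features and output columns; myGenerator is the generator above.
import tensorflow as tf

dataset = tf.data.Dataset.from_generator(
    myGenerator,
    output_signature=(
        tf.TensorSpec(shape=(None, None), dtype=tf.float32),  # features (assumed shape/dtype)
        tf.TensorSpec(shape=(None,), dtype=tf.float32),       # output (assumed shape/dtype)
    ),
).prefetch(2)  # read up to 2 chunks ahead while the GPU is busy

# steps_per_epoch is required because the generator loops forever (while 1)
model.fit(dataset, steps_per_epoch=500, epochs=10)  # 500 is a placeholder value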

Spark withColumn performance

I wrote some code in Spark as follows:
val df = sqlContext.read.json("s3n://blah/blah.gz").repartition(200)
val newdf = df.select("KUID", "XFF", "TS", "UA").groupBy("KUID", "XFF", "UA").agg(max(df("TS")) as "TS").filter(!(df("UA") === ""))
val dfUdf = udf((z: String) => {
  val parser: UserAgentStringParser = UADetectorServiceFactory.getResourceModuleParser();
  val readableua = parser.parse(z)
  Array(readableua.getName, readableua.getOperatingSystem.getName, readableua.getDeviceCategory.getName)
})
val df1 = newdf.withColumn("useragent", dfUdf(col("UA"))) // <--- PROBLEM LINE 1
val df2 = df1.map {
  case org.apache.spark.sql.Row(col1: String, col2: String, col3: String, col4: String, col5: scala.collection.mutable.WrappedArray[String]) =>
    (col1, col2, col3, col4, col5(0), col5(1), col5(2))
}.toDF("KUID", "XFF", "UA", "TS", "browser", "os", "device")
val dataset = df2.dropDuplicates(Seq("KUID")).drop("UA")
val mobile = dataset.filter(dataset("device") === "Smartphone" || dataset("device") === "Tablet")
mobile.write.format("com.databricks.spark.csv").save("s3n://blah/blah.csv")
Here is a sample of the input data
{"TS":"1461762084","XFF":"85.255.235.31","IP":"10.75.137.217","KUID":"JilBNVgx","UA":"Flixster/1066 CFNetwork/758.3.15 Darwin/15.4.0" }
So in the above code snippet, I am reading a gzipped file of 2.4 GB. The read takes 9 minutes. I then group by ID and take the max timestamp. However (at PROBLEM LINE 1), the line which adds a column with withColumn takes 2 hours. This line takes a user agent and tries to derive the OS, device and browser info. Is this the wrong way to do things here?
I am running this on a 4-node AWS cluster of r3.4xlarge instances (8 cores and 122 GB memory) with the following configuration:
--executor-memory 30G --num-executors 9 --executor-cores 5
The problem here is that gzip is not splittable and cannot be read in parallel. What happens in the background is that a single process will download the file from the bucket, and only then does it get repartitioned to distribute the data across the cluster. Please re-encode the input data in a splittable format. If the input file does not change a lot, you could for example consider bzip2 (encoding is quite expensive and might take some time).
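As a one-off illustration (not part of the original answer, and the output path is a placeholder), a PySpark job to re-encode the data might look like this:
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read the gzipped JSON once, write it back bzip2-compressed;
# bzip2 is splittable, so later jobs can read it in parallel.
df = spark.read.json("s3n://blah/blah.gz")
df.write.option("compression", "bzip2").json("s3n://blah/blah-bzip2/")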
Update: Picking up answer from Roberto and sticking it here for the benefit of all
You are creating a new parser for every row within the UDF: val parser: UserAgentStringParser = UADetectorServiceFactory.getResourceModuleParser(). It's probably expensive to construct, so you should construct one outside the UDF and use it through a closure.
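To make the pattern concrete in Python/PySpark (the original code is Scala, and StubUAParser below is a hypothetical stand-in for the UA parser; the only point being illustrated is where the parser gets constructed):
from pyspark.sql import functions as F
from pyspark.sql.types import ArrayType, StringType

class StubUAParser:
    # hypothetical stand-in for an expensive-to-construct user-agent parser
    def parse(self, ua):
        return {"browser": "unknown", "os": "unknown", "device": "unknown"}

parser = StubUAParser()  # constructed once, captured by the UDF's closure

@F.udf(returnType=ArrayType(StringType()))
def parse_ua(ua):
    r = parser.parse(ua)  # reused for every row instead of being rebuilt per row
    return [r["browser"], r["os"], r["device"]]

# usage: newdf.withColumn("useragent", parse_ua(F.col("UA")))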

Pyaudio : how to check volume

I'm currently developing a VOIP tool in Python working as a client-server. My problem is that I'm currently sending the PyAudio input stream as follows, even when there is no sound (well, when nobody talks or there is no noise, data is sent as well):
CHUNK = 1024

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16,
                channels=1,
                rate=44100,
                input=True,
                frames_per_buffer=CHUNK)

while 1:
    self.conn.sendVoice(stream.read(CHUNK))
I would like to check the volume to get something like this:
data = stream.read(CHUNK)
if data.volume > 20%:  # pseudocode
    self.conn.sendVoice(data)
This way I could avoid sending useless data, spare the connection, and increase performance. (Also, I'm looking for some kind of compression, but I think I will have to ask about that in another topic.)
It can be done using the root mean square (RMS).
One way to build your own RMS function in Python is:
import math
import struct

def rms(data):
    count = len(data) // 2
    format = "%dh" % count
    shorts = struct.unpack(format, data)
    sum_squares = 0.0
    for sample in shorts:
        n = sample * (1.0 / 32768)
        sum_squares += n * n
    return math.sqrt(sum_squares / count)
Another choice is to use audioop to find the RMS:
import audioop

data = stream.read(CHUNK)
rms = audioop.rms(data, 2)  # 2 = sample width in bytes for paInt16
Now, if you want, you can convert the RMS to a decibel scale: decibel = 20 * log10(rms)
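Putting this together with the loop from the question, a rough sketch of gating on the RMS value might look like this (RMS_THRESHOLD is an arbitrary value that needs tuning for your microphone; stream and self.conn are the objects from the question):
import audioop
import math

CHUNK = 1024
RMS_THRESHOLD = 500  # arbitrary; tune for your microphone and environment

while True:
    data = stream.read(CHUNK)
    rms = audioop.rms(data, 2)  # width=2 bytes per sample for paInt16
    if rms > RMS_THRESHOLD:     # only send frames that actually carry sound
        self.conn.sendVoice(data)
    # optional: express the level in decibels (guard against log10(0))
    # level_db = 20 * math.log10(rms) if rms > 0 else float("-inf")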
