How would I limit upload speed from the server in node.js?

How would I limit upload speed from the server in node.js?
Is this even an option?
Scenario: I'm writing some methods to allow users to upload files to my server in an automated way. I want to limit the upload speed to (for instance) 50 kB/s (configurable, of course).

I do not think you can force a client to stream at a predefined speed; however, you can control the "average speed" of the entire process.
var startTime = Date.now(),
    targetUploadSpeed = 50 * 1024, // target speed in bytes per second (configurable)
    curBytes = 0;
stream.on('data', function(chunk) { // NOTE: chunk is expected to be a Buffer; for strings use Buffer.byteLength(chunk)
    curBytes += chunk.length;
    var offsetTime = calcReqDelay(targetUploadSpeed);
    if (offsetTime > 0) {
        stream.pause();
        setTimeout(function() {
            stream.resume();
        }, offsetTime);
    }
});
function calcReqDelay(targetUploadSpeed) { // speed in bytes per second
    var timePassed = Date.now() - startTime;
    // How long receiving curBytes should have taken at the target speed,
    // minus how long it actually took (negative means we should actually be faster).
    return curBytes / targetUploadSpeed * 1000 - timePassed;
}
This is of course pseudo code, but you probably get the point. There may be another, better way that I do not know about; if so, I hope someone else will point it out.
Note that it is also not very precise, and you may want to use a different metric than the average speed.

Use the throttle module to control the speed of a piped stream:
npm install throttle
var Throttle = require('throttle');
// create a "Throttle" instance that reads at 1 b/s
var throttle = new Throttle(1);
req.pipe(throttle).pipe(gzip).pipe(res);

Instead of rolling your own, the normal way to do this in production is to let your load balancer or entry server throttle the incoming requests. See http://en.wikipedia.org/wiki/Bandwidth_throttling. It's not typically something an application needs to handle itself.

Related

THREE.Audio filter not ramping up with linearRampToValueAtTime

I'm having a little trouble working with linearRampToValueAtTime on a BiquadFilter applied to a THREE.Audio object.
The audio works OK, and the initial lowpass filter is applied.
The problem is, as soon as I use the linearRamp method to bring up the frequency, it seems to ignore the endTime parameter (or rather, it's not timed correctly).
Some code to explain it better.
Here's the instancing:
this.audioLoader.load( 'public/media/soundtrack-es_cobwebs_in_the_sky.mp3', buffer => {
    this.sounds.soundtrack = new THREE.Audio(this.listener);
    const audioContext = this.sounds.soundtrack.context;
    this.biquadFilter = audioContext.createBiquadFilter();
    this.biquadFilter.type = "lowpass"; // Low pass filter
    this.biquadFilter.frequency.setValueAtTime(200, audioContext.currentTime);
    this.sounds.soundtrack.setBuffer(buffer);
    this.sounds.soundtrack.setFilter(this.biquadFilter);
    this.sounds.soundtrack.setVolume(0.5);
    this.sounds.soundtrack.play();
})
Until here, everything looks ok. The sound plays muffled as needed.
Then, after a certain event, there's a camera transition, where I want the sound to gradually open up.
As the endTime parameter, I'm passing 2 seconds plus the current internal context time.
this.sounds.soundtrack.filters[0].frequency.linearRampToValueAtTime(2400, 2 + this.sounds.soundtrack.context.currentTime);
I'm expecting to hear the ramp over two seconds, but the sound opens up immediately.
What am I missing?
The linear ramp will be applied using the previous event as the startTime. In your case that will be audioContext.currentTime at the point in time when you created the filter. If that is sufficiently long ago it will sound as if the ramp jumps right to the end value. You can fix that by inserting a new event right before the ramp.
const currentTime = this.sounds.soundtrack.context.currentTime;
const filter = this.sounds.soundtrack.filters[0];
filter.frequency.setValueAtTime(200, currentTime);
filter.frequency.linearRampToValueAtTime(2400, currentTime + 2);

XTestFakeKeyEvent calls get swallowed

I'm trying to spoof keystrokes; to be a bit more precise: I'm replaying a number of keystrokes which should all get sent at a certain time - sometimes several at the same time (or at least as close together as reasonably possible).
Implementing this using XTestFakeKeyEvent, I've come across a problem. While what I've written so far mostly works as it is intended and sends the events at the correct time, sometimes a number of them will fail. XTestFakeKeyEvent never returns zero (which would indicate failure), but these events never seem to reach the application I'm trying to send them to. I suspect that this might be due to the frequency of calls being too high (sometimes 100+/second) as it looks like it's more prone to fail when there's a large number of keystrokes/second.
A little program to illustrate what I'm doing, incomplete and without error checks for the sake of conciseness:
// #includes ...
struct action {
    int time; // Time where this should be executed.
    int down; // Keydown or keyup?
    int code; // The VK to simulate the event for.
};
Display *display;
int nactions;           // actions array length.
struct action *actions; // Array of actions we'll want to "execute".
int main(void)
{
    display = XOpenDisplay(NULL);
    nactions = get_actions(&actions);
    int cur_time;
    int cur_i = 0;
    struct action *cur_action;
    // While there's still actions to execute.
    while (cur_i < nactions) {
        cur_time = get_time();
        cur_action = actions + cur_i;
        // For each action that is (over)due.
        while ((cur_action = actions + cur_i)->time <= cur_time) {
            cur_i++;
            XTestFakeKeyEvent(display, cur_action->code,
                              cur_action->down, CurrentTime);
            XFlush(display);
        }
        // Sleep for 1ms.
        nanosleep((struct timespec[]){{0, 1000000L}}, NULL);
    }
}
I realize that the code above is very specific to my case, but I suspect that this is a broader problem - which is also why I'm asking this here.
Is there a limit to how often you can/should flush XEvents? Could the application I'm sending this to be the issue, maybe failing to read them quickly enough?
It's been a little while but after some tinkering, it turned out that my delay between key down and key up was simply too low. After setting it to 15ms the application registered the actions as keystrokes properly and (still) with very high accuracy.
I feel a little silly in retrospect, but I do feel like this might be something others could stumble over as well.
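For anyone hitting the same wall, here is a minimal sketch of what that fix looks like in practice (assumptions: X11 with the XTest extension available, and key_code is a KeyCode you have already looked up; the 15 ms figure is just the value that worked above, not a hard rule):
#include <unistd.h>
#include <X11/Xlib.h>
#include <X11/extensions/XTest.h>

// Send one keystroke as two events spaced ~15 ms apart, so the target
// application has time to register the key-down before the key-up arrives.
static void send_keystroke(Display *display, unsigned int key_code)
{
    XTestFakeKeyEvent(display, key_code, True, CurrentTime);  // key down
    XFlush(display);
    usleep(15 * 1000);                                        // ~15 ms gap
    XTestFakeKeyEvent(display, key_code, False, CurrentTime); // key up
    XFlush(display);
}
The delay argument of XTestFakeKeyEvent could also be used to have the server space the events out instead of sleeping in the client; the sketch above simply mirrors the structure of the original loop.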

Stream publishing using ffmpeg rtmp: network bandwidth not fully utilized

I'm developing an application that needs to publish a media stream to an rtmp "ingestion" url (as used in YouTube Live, or as input to Wowza Streaming Engine, etc), and I'm using the ffmpeg library (programmatically, from C/C++, not the command line tool) to handle the rtmp layer. I've got a working version ready, but am seeing some problems when streaming higher bandwidth streams to servers with worse ping. The problem exists both when using the ffmpeg "native"/builtin rtmp implementation and the librtmp implementation.
When streaming to a local target server with low ping through a good network (specifically, a local Wowza server), my code has so far handled every stream I've thrown at it and managed to upload everything in real time - which is important, since this is meant exclusively for live streams.
However, when streaming to a remote server with a worse ping (e.g. the youtube ingestion urls on a.rtmp.youtube.com, which for me have 50+ms pings), lower bandwidth streams work fine, but with higher bandwidth streams the network is underutilized - for example, for a 400kB/s stream, I'm only seeing ~140kB/s network usage, with a lot of frames getting delayed/dropped, depending on the strategy I'm using to handle network pushback.
Now, I know this is not a problem with the network connection to the target server, because I can successfully upload the stream in real time when using the ffmpeg command line tool to the same target server or using my code to stream to a local Wowza server which then forwards the stream to the youtube ingestion point.
So the network connection is not the problem and the issue seems to lie with my code.
I've timed various parts of my code and found that when the problem appears, calls to av_write_frame / av_interleaved_write_frame (I never mix & match them, I am always using one version consistently in any specific build, it's just that I've experimented with both to see if there is any difference) sometimes take a really long time - I've seen those calls sometimes take up to 500-1000ms, though the average "bad case" is in the 50-100ms range. Not all calls to them take this long, most return instantly, but the average time spent in these calls grows bigger than the average frame duration, so I'm not getting a real time upload anymore.
The main suspect, it seems to me, could be the rtmp Acknowledgement Window mechanism, where a sender of data waits for a confirmation of receipt after sending every N bytes, before sending any more data - this would explain the available network bandwidth not being fully used, since the client would simply sit there and wait for a response (which takes a longer time because of the lower ping), instead of using the available bandwidth. Though I haven't looked at ffmpeg's rtmp/librtmp code to see if it actually implements this kind of throttling, so it could be something else entirely.
The full code of the application is too much to post here, but here are some important snippets:
Format context creation:
const int nAVFormatContextCreateError = avformat_alloc_output_context2(&m_pAVFormatContext, nullptr, "flv", m_sOutputUrl.c_str());
Stream creation:
m_pVideoAVStream = avformat_new_stream(m_pAVFormatContext, nullptr);
m_pVideoAVStream->id = m_pAVFormatContext->nb_streams - 1;
m_pAudioAVStream = avformat_new_stream(m_pAVFormatContext, nullptr);
m_pAudioAVStream->id = m_pAVFormatContext->nb_streams - 1;
Video stream setup:
m_pVideoAVStream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
m_pVideoAVStream->codecpar->codec_id = AV_CODEC_ID_H264;
m_pVideoAVStream->codecpar->width = nWidth;
m_pVideoAVStream->codecpar->height = nHeight;
m_pVideoAVStream->codecpar->format = AV_PIX_FMT_YUV420P;
m_pVideoAVStream->codecpar->bit_rate = 10 * 1000 * 1000;
m_pVideoAVStream->time_base = AVRational { 1, 1000 };
m_pVideoAVStream->codecpar->extradata_size = int(nTotalSizeRequired);
m_pVideoAVStream->codecpar->extradata = (uint8_t*)av_malloc(m_pVideoAVStream->codecpar->extradata_size + AV_INPUT_BUFFER_PADDING_SIZE);
// Fill in the extradata here - I'm sure I'm doing that correctly.
Audio stream setup:
m_pAudioAVStream->time_base = AVRational { 1, 1000 };
// Let's leave creation of m_pAudioCodecContext out of the scope of this question, I'm quite sure everything is done right there.
const int nAudioCodecCopyParamsError = avcodec_parameters_from_context(m_pAudioAVStream->codecpar, m_pAudioCodecContext);
Opening the connection:
const int nAVioOpenError = avio_open2(&m_pAVFormatContext->pb, m_sOutputUrl.c_str(), AVIO_FLAG_WRITE);
Starting the stream:
AVDictionary * pOptions = nullptr;
const int nWriteHeaderError = avformat_write_header(m_pAVFormatContext, &pOptions);
Sending a video frame:
AVPacket pkt = { 0 };
av_init_packet(&pkt);
pkt.dts = nTimestamp;
pkt.pts = nTimestamp;
pkt.duration = nDuration; // I know that I have the wrong duration sometimes, but I don't think that's the issue.
pkt.data = pFrameData;
pkt.size = pFrameDataSize;
pkt.flags = bKeyframe ? AV_PKT_FLAG_KEY : 0;
pkt.stream_index = m_pVideoAVStream->index;
const int nWriteFrameError = av_write_frame(m_pAVFormatContext, &pkt); // This is where too much time is spent.
Sending an audio frame:
AVPacket pkt = { 0 };
av_init_packet(&pkt);
pkt.pts = m_nTimestampMs;
pkt.dts = m_nTimestampMs;
pkt.duration = m_nDurationMs;
pkt.stream_index = m_pAudioAVStream->index;
const int nWriteFrameError = av_write_frame(m_pAVFormatContext, &pkt);
Any ideas? Am I on the right track with thinking about the Acknowledgement Window? Am I doing something else completely wrong?
I don't think this explains everything, but, just in case, for someone in a similar situation, the fix/workaround I found was:
1) build ffmpeg with the librtmp implementation of the rtmp protocol
2) build ffmpeg with --enable-network, which adds a couple of features to the librtmp protocol
3) pass the "rtmp_buffer_size" parameter to avio_open2, and increase its value to a satisfactory one (see the sketch below)
I can't give you a full step-by-step explanation of what was going wrong, but this fixed at least the symptom that was causing me problems.
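To make step 3 concrete, here is a rough sketch of how such an option can be passed, reusing the member names from the question's snippets; the option name is the one from step 3, and the value is only an illustrative placeholder to tune:
AVDictionary * pProtocolOptions = nullptr;
av_dict_set(&pProtocolOptions, "rtmp_buffer_size", "10000000", 0); // placeholder value; tune for your bitrate/ping
const int nAVioOpenError = avio_open2(&m_pAVFormatContext->pb, m_sOutputUrl.c_str(),
                                      AVIO_FLAG_WRITE, nullptr, &pProtocolOptions);
av_dict_free(&pProtocolOptions); // entries the protocol did not consume are still in the dictionary here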

Parallel loops and Random produce odd results

I just started playing with the Task Parallel Library, and ran into interesting issues; I have a general idea of what is going on, but would like to hear comments from people more competent than me to help understand what is happening. My apologies for the somewhat lengthy code.
I started with a non-parallel simulation of a random walk:
var random = new Random();
Stopwatch stopwatch = new Stopwatch();
stopwatch.Start();
var simulations = new List<int>();
for (var run = 0; run < 20; run++)
{
    var position = 0;
    for (var step = 0; step < 10000000; step++)
    {
        if (random.Next(0, 2) == 0)
        {
            position--;
        }
        else
        {
            position++;
        }
    }
    Console.WriteLine(string.Format("Terminated run {0} at position {1}.", run, position));
    simulations.Add(position);
}
Console.WriteLine(string.Format("Average position: {0} .", simulations.Average()));
stopwatch.Stop();
Console.WriteLine(string.Format("Time elapsed: {0}", stopwatch.ElapsedMilliseconds));
Console.ReadLine();
I then wrote my first attempt at a parallel loop:
var localRandom = new Random();
stopwatch.Reset();
stopwatch.Start();
var parallelSimulations = new List<int>();
Parallel.For(0, 20, run =>
{
    var position = 0;
    for (var step = 0; step < 10000000; step++)
    {
        if (localRandom.Next(0, 2) == 0)
        {
            position--;
        }
        else
        {
            position++;
        }
    }
    Console.WriteLine(string.Format("Terminated run {0} at position {1}.", run, position));
    parallelSimulations.Add(position);
});
Console.WriteLine(string.Format("Average position: {0} .", parallelSimulations.Average()));
stopwatch.Stop();
Console.WriteLine(string.Format("Time elapsed: {0}", stopwatch.ElapsedMilliseconds));
Console.ReadLine();
When I ran it on a virtual machine set to use 1 core only, I observed a similar duration, but the runs are no longer processed in order - no surprise.
When I ran it on a dual-core machine, things went odd. I saw no improvement in time, and observed some very weird results for each run. Most runs end up with results of -1,000,000 (or very close), which indicates that Random.Next is returning 0 almost all the time.
When I make the random local to each loop, everything works just fine, and I get the expected duration improvement:
Parallel.For(0, 20, run =>
{
    var localRandom = new Random();
    var position = 0;
My guess is that the problem has to do with the fact that the Random object is shared between the loops and has some state. The lack of improvement in duration in the "failing parallel" version is, I assume, due to the fact that the calls to Random are not processed in parallel (even though I see that the parallel version uses both cores, whereas the original doesn't). The piece I really don't get is why the simulation results are what they are.
One separate worry I have is that if I use Random instances local to each loop, I may run into the problem of having multiple loops starting with the same seed (the issue you get when you generate multiple Randoms too close in time, resulting in identical sequences).
Any insight in what is going on would be very valuable to me!
Neither of these approaches will give you really good random numbers.
This blog post covers a lot of approaches for getting better random numbers with Random: Link
These may be fine for many day-to-day applications.
However if you use the same random number generator on multiple threads even with different seeds you will still impact the quality of your random numbers. This is because you are generating sequences of pseudo-random numbers which may overlap.
This video explains why in a bit more detail:
http://software.intel.com/en-us/videos/tim-mattson-use-and-abuse-of-random-numbers/
If you want really random numbers, then you need to use the crypto random number generator System.Security.Cryptography.RNGCryptoServiceProvider, which is thread-safe.
The Random class is not thread-safe; if you use it on multiple threads, it can get messed up.
You should make a separate Random instance on each thread, and make sure that they don't end up using the same seed (e.g., Environment.TickCount * Thread.CurrentThread.ManagedThreadId).
One core problem:
random.Next is not thread safe.
Two ramifications:
Quality of the randomness is destroyed by race conditions.
False sharing destroys scalability on multicores.
Several possible solutions:
Make random.Next thread safe: solves quality issue but not scalability.
Use multiple PRNGs: solves scalability issue but may degrade quality.
...

Write code to make CPU usage display a sine wave

Write code in your favorite language and let Windows Task Manager represent a sine wave in CPU Usage History.
This is a technical interview quiz from Microsoft China.
I think it's a good question. In particular, it's worth knowing how a candidate understands the problem and works out a solution.
Edit: It would be a bonus if the solution also covers multi-core (CPU) cases.
A thread time slice in Windows is 40ms, iirc, so that might be a good number to use as the 100% mark.
unsigned const TIME_SLICE = 40;
float const PI = 3.14159265358979323846f;
while (true)
{
    for (unsigned x = 0; x != 360; ++x)
    {
        float t = sin(static_cast<float>(x) / 180 * PI) * 0.5f + 0.5f;
        DWORD busy_time = static_cast<DWORD>(t * TIME_SLICE);
        DWORD wait_start = GetTickCount();
        while (GetTickCount() - wait_start < busy_time)
        {
        }
        Sleep(TIME_SLICE - busy_time);
    }
}
This would give a period of about 14 seconds. Obviously this assumes there is no other significant cpu usage in the system, and that you are only running it on a single CPU. Neither of these is really that common in reality.
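On the multi-core point from the question's edit, a rough sketch of one way to extend this (portable C++11 with std::thread and a steady clock instead of the Windows-specific GetTickCount/Sleep; the 40 ms slice and per-degree stepping are kept from the answer above) is to run the same busy/sleep duty cycle on one thread per logical core, so that every core, and therefore the overall CPU graph, traces the sine wave:
#include <chrono>
#include <cmath>
#include <thread>
#include <vector>

int main()
{
    const auto slice = std::chrono::milliseconds(40); // one duty-cycle period
    const double pi = 3.14159265358979323846;

    auto worker = [&] {
        for (unsigned x = 0; ; x = (x + 1) % 360) {
            // Fraction of the slice to stay busy, following a sine between 0 and 1.
            double t = std::sin(x / 180.0 * pi) * 0.5 + 0.5;
            auto busy = std::chrono::duration_cast<std::chrono::milliseconds>(t * slice);
            auto start = std::chrono::steady_clock::now();
            while (std::chrono::steady_clock::now() - start < busy) {
                // spin to generate load
            }
            std::this_thread::sleep_for(slice - busy);
        }
    };

    // One worker per logical core so the load shows up on every core.
    unsigned cores = std::thread::hardware_concurrency();
    std::vector<std::thread> threads;
    for (unsigned i = 0; i < (cores ? cores : 1); ++i)
        threads.emplace_back(worker);
    for (auto& th : threads)
        th.join();
}
The same caveat applies: any other significant load on the machine will distort the curve.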
Here's a slightly modified version of #flodin's solution in Python:
#!/usr/bin/env python
import itertools, math, time, sys
time_period = float(sys.argv[1]) if len(sys.argv) > 1 else 30   # seconds
time_slice = float(sys.argv[2]) if len(sys.argv) > 2 else 0.04  # seconds
N = int(time_period / time_slice)
for i in itertools.cycle(range(N)):
    busy_time = time_slice / 2 * (math.sin(2 * math.pi * i / N) + 1)
    t = time.perf_counter() + busy_time
    while t > time.perf_counter():
        pass
    time.sleep(time_slice - busy_time)
The CPU curve can be fine-tuned using the time_period and time_slice parameters.
OK, I have a different, probably BETTER solution than my first answer.
Instead of trying to manipulate the CPU, hook into the task manager app and force it to draw what you want instead of the CPU results. Take over the GDI object that plots the graph, etc. Sort of "cheating", but they didn't say you had to manipulate the CPU.
Or even hook the call from task manager that gets the CPU %, returning a sine result instead.
With the literally hundreds (thousands?) of threads a PC runs today, the only way I can think to even come close would be to poll CPU usage as fast as possible, and if the usage % was below where it should be on the curve, fire off a short method that just churns numbers. That will at least bring the typical low usage UP where needed, but I can't think of a good way to LOWER it without somehow taking control of other threads and doing something such as forcing their priority lower.
Something like this:
while (true)
{
    for (int i = 0; i < 360; i++)
    {
        // Convert i (degrees) to radians for Math.Sin.
        double radians = i * Math.PI / 180;
        // Sleep longer when the sine is low; the +1 keeps the argument non-negative.
        Thread.Sleep((int)((Math.Sin(radians) + 1) * 20));
        // Some work to keep the CPU busy; increase the bound to see the influence on the CPU.
        for (int j = 0; j < 1000000; j++) ;
    }
}
