Do ADODB.Stream chunks conflict with Response.Buffer? [duplicate] - vbscript

I have the following script, which works well locally (Windows 10 IIS and a Windows 2003 Server), but not on our hosting server (Windows 2003 Server). Anything over 4 MB downloads really slowly and then times out before it gets to the end of the file. Locally, however, it downloads quickly and completely.
Doing a direct download (a link to the file itself) downloads a 26.5 MB file in 5 seconds from our hosting provider's server, so there is no download limit at play. The issue seems to be with the hosting server and this script. Any ideas?
Response.AddHeader "content-disposition", "filename=" & strfileName
Response.ContentType = "application/x-zip-compressed" ' set your content type here
Dim strFilePath, lSize, lBlocks
Const CHUNK = 2048
Set objStream = CreateObject("ADODB.Stream")
objStream.Open
objStream.Type = 1
objStream.LoadFromFile Server.MapPath("up/" & strfileName)
lSize = objStream.Size
Response.AddHeader "Content-Size", lSize
lBlocks = 1
Response.Buffer = False
Do Until objStream.EOS Or Not Response.IsClientConnected
    Response.BinaryWrite objStream.Read(CHUNK)
Loop
objStream.Close

Just looking at the code snippet, it appears to be fine and is the very approach I would use for downloading large files (I especially like the use of Response.IsClientConnected).
Having said that, the problem is likely the size of the chunks being read in relation to the size of the file.
Very roughly, the formula is something like this:
time to read = (file size / chunk size) * read time per chunk
So if we take your example of a 4 MB file (4,194,304 bytes) and say it takes 100 milliseconds to read each chunk, the following applies:
A chunk size of 2048 bytes (2 KB) will take approx. 3.5 minutes to read.
A chunk size of 20480 bytes (20 KB) will take approx. 20 seconds to read.
Classic ASP pages on IIS 7 and above have a default scriptTimeout of 00:01:30, so in the example above a 4 MB file read constantly at 100 milliseconds per 2 KB chunk would time out before the script could finish.
These are just rough figures; your read time won't stay constant and will likely be faster than 100 milliseconds (depending on disk read speeds), but I think you get the point.
So just try increasing the CHUNK.
Const CHUNK = 20480 'Read in chunks of 20 KB

The code I have is a bit different, using a For...Next loop instead of a Do...Until loop. I'm not 100% sure this will really work in your case, but it's worth a try. Here is my version of the code:
For i = 1 To iSz / chunkSize
    If Not Response.IsClientConnected Then Exit For
    Response.BinaryWrite objStream.Read(chunkSize)
Next
If iSz Mod chunkSize > 0 Then
    If Response.IsClientConnected Then
        Response.BinaryWrite objStream.Read(iSz Mod chunkSize)
    End If
End If

Basically, it is due to the script timeout. I had the same problem with 1 GB files after upgrading to Windows Server 2016 with IIS 10 (the default timeout is shorter there).
I use chunks of 256000 bytes and Server.ScriptTimeout = 600 ' 10 minutes
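Putting both answers together, a minimal sketch of the download page with a larger chunk size and an explicit script timeout might look like this (the 256000-byte chunk and the 600-second timeout are just the values mentioned above; the "attachment" disposition and the optional Content-Length header are my additions, not part of the original code):
<%
' Sketch only: larger chunks plus a longer script timeout.
' strfileName is assumed to be set and validated elsewhere.
Server.ScriptTimeout = 600                ' 10 minutes
Const CHUNK = 256000                      ' read in ~250 KB chunks

Response.Buffer = False
Response.ContentType = "application/x-zip-compressed"
Response.AddHeader "Content-Disposition", "attachment; filename=" & strfileName

Dim objStream
Set objStream = CreateObject("ADODB.Stream")
objStream.Open
objStream.Type = 1                        ' adTypeBinary
objStream.LoadFromFile Server.MapPath("up/" & strfileName)
' Optionally: Response.AddHeader "Content-Length", CStr(objStream.Size)

Do Until objStream.EOS Or Not Response.IsClientConnected
    Response.BinaryWrite objStream.Read(CHUNK)
Loop

objStream.Close
Set objStream = Nothing
%>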

Related

Storing the data using Parallel Programming

The requirement is as follows:
We have a third-party client from whom we need to take data and (ultimately) store it in a database.
The client shares the data with us through DLLs exposing functions (the DLLs are built from C++ code); we call those functions with the appropriate parameters and get the result back.
Declare Function wcmo_deal Lib "E:\IGB\System\Intex\vcmowrap\vcmowr64.dll" (
    ByRef WCMOarg_Handle As String,
    ByRef WCMOarg_User As String,
    ByRef WCMOarg_Options As String,
    ByRef WCMOarg_Deal As String,
    ByRef WCMOarg_DataOut As String,
    ByRef WCMOarg_ErrOut As String) _
    As Long
wcmo_deal(wcmo_deal_WCMOarg_Handle, WCMOarg_User, WCMOarg_Options, WCMOarg_Deal, WCMOarg_DataOut, WCMOarg_ErrOut)
Here WCMOarg_DataOut is the data we get and that needs to be stored.
Similar to the above method, we have 10 more methods (so 11 methods in total) which pull the data, and that data (a string of around 500 KB to 1 MB each) is stored in files using the method below:
File.WriteAllText(logPath & sDealName & ".txt", sDealName & " - " & WCMOarg_ErrOut & vbCrLf)
These method calls run for each deal, so for a single deal we get output in 11 different folders, each with a text file containing the data received from the client.
There are 5000 deals in total for which we need to call these methods and store the data in files.
This functionality has been implemented using parallel programming with a master-child relationship, as follows:
Dim opts As New ParallelOptions
opts.MaxDegreeOfParallelism = System.Environment.ProcessorCount
Parallel.ForEach(dealList, opts, Sub(deal)
    If Len(deal) > 0 Then
        Dim dealPass As String = ""
        Try
            If dealPassDict.ContainsKey(deal.ToUpper) Then
                dealPass = dealPassDict(deal.ToUpper)
            End If
            Dim p As New Process()
            p.StartInfo.FileName = "E:\IGB_New\CMBS Intex Data Deal v2.0.exe"
            p.StartInfo.Arguments = deal & "|" & keycode & "|" & dealPass & "|" & clArgs(1) & "|"
            p.StartInfo.UseShellExecute = False
            p.StartInfo.CreateNoWindow = True
            p.Start()
            p.WaitForExit()
        Catch ex As Exception
            exceptions.Enqueue(ex)
        End Try
    End If
End Sub)
where CMBS Intex Data Deal v2.0.exe is the child executable, which will run 5000 times since dealList contains 5000 deals.
The CMBS Intex Data Deal v2.0.exe code calls the DLLs and stores the data in the files as mentioned above.
Issues faced :
The code was first run with the master and child code in one single place, but we got an out-of-memory exception after about 3000 deals. [32 GB RAM, processor count = 16]
The above (master-child) code also takes up a lot of memory. It runs fine up to about 4800 deals in one hour (memory usage gradually reaches 100% at around 4800 deals), and then the remaining 200 deals take close to another hour (so two hours in total). [32 GB RAM, processor count = 16]
The master-child split was tried on the assumption that the GC would take care of disposing of all the objects in the child process.
After the data is stored in text files, a perl script runs and loads the data into the database.
Approach tried:
Instead of keeping the data in text files and then loading it into the database, I tried storing the data in the DB directly without putting it in files (assuming the I/O operations consume a lot of memory), but this didn't work either, as the DB crashed/hung every time.
Note:
All the handles related to the DLLs are being closed properly.
The calls to the DLLs' methods consume a lot of memory, but nothing can be done to reduce that, as it is not under our control.
The reason for the parallel approach is that a sequential approach would take many hours to fetch and load the data, and we need to run this twice a day because the data keeps changing, so we need to stay up to date with the latest data from the client.
There was also a CPU-maxing-out issue, but that has been resolved by setting MaxDegreeOfParallelism = System.Environment.ProcessorCount.
Question :
Is there a way to reduce the time taken by the process to complete?
Currently it takes 2 hours, but that could be because no memory is left by the time it reaches 4800 deals, and without memory it cannot process any further.
Is there a way to reduce memory consumption here, either by executing this in a different way or by changing something in the same code?
Parallelism is most likely totally useless here. You are bound by the IO, not by the CPU; the IO is the bottleneck, and parallelization might even make it worse. You might experiment with using a RAM drive and then copying all the output to the actual storage.
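One cheap way to test whether the work really is IO-bound is to rerun the same loop with a much lower degree of parallelism and compare the throughput; if the run time barely changes, the CPU is not the limiting factor. A minimal sketch, assuming a hypothetical ProcessDeal wrapper around the existing Process.Start logic (the value 4 is arbitrary):
' Sketch only: lower the degree of parallelism and compare throughput.
' ProcessDeal is a hypothetical wrapper around the existing child-process call.
Dim opts As New ParallelOptions With {.MaxDegreeOfParallelism = 4}
Parallel.ForEach(dealList, opts,
                 Sub(deal)
                     If Len(deal) > 0 Then ProcessDeal(deal)
                 End Sub)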

Nginx and sysctl configuration - Performance setting

Nginx is acting as a reverse proxy for an ad server, receiving 20k requests per minute. Responses come back from the ad server to Nginx within 100 ms.
It is running on a virtual machine with the following configuration:
128 GB RAM
4 vCPU
100 GB HDD
Considering the above, what are good settings for Nginx and also for sysctl.conf?
Please keep in mind that kernel tuning is complex and requires a lot of evaluation until you get the correct results. If someone spots a mistake, please let me know so that I can adjust my own configuration :-)
Also, your memory is quite high for this amount of requests if the server is only running Nginx; check how much you are actually using during peak hours and adjust accordingly.
An important thing to check is the number of file descriptors. In your situation I would set it to 65,000 to cope with the 20,000+ requests per second. The reason is that in a normal situation you would only need about 4,000 file descriptors, as you would have roughly 4,000 simultaneous open connections (20,000 * 2 * 0.1). However, if a back end has an issue it could take 1 second or more to load an advertisement, and in that case the number of simultaneous open connections would be higher:
20,000 * 2 * 1.5 = 60,000
So setting it to 65K would, in my opinion, be a safe value.
You can check the amount of file descriptors via:
cat /proc/sys/fs/file-max
If this is below 65000, you'll need to set it in /etc/sysctl.conf:
fs.file-max = 65000
Also for Nginx you'll need to add the following in the file: /etc/systemd/system/nginx.service.d/override.conf
[Service]
LimitNOFILE=65000
In the nginx.conf file:
worker_rlimit_nofile 65000;
When added you will need to apply the changes:
sudo sysctl -p
sudo systemctl daemon-reload
sudo systemctl restart nginx
With those applied, the following settings will get you started:
vm.swappiness = 0 # The kernel will swap only to avoid an out-of-memory condition
vm.min_free_kbytes = 327680 # The kernel will start reclaiming/swapping when free memory is below this limit (320 MB)
vm.vfs_cache_pressure = 125 # Reclaim memory used for caching of VFS caches more quickly
vm.dirty_ratio = 15 # Write pages to disk when 15% of memory is dirty
vm.dirty_background_ratio = 10 # The system can start writing pages to disk in the background when 10% of memory is dirty
Additionally, I use the following security settings in my sysctl configuration in conjunction with the tunables above. Feel free to use them.
# Avoid a smurf attack
net.ipv4.icmp_echo_ignore_broadcasts = 1
# Turn on protection for bad icmp error messages
net.ipv4.icmp_ignore_bogus_error_responses = 1
# Turn on syncookies for SYN flood attack protection
net.ipv4.tcp_syncookies = 1
# Turn on and log spoofed, source routed, and redirect packets
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.log_martians = 1
# No source routed packets here
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
# Turn on reverse path filtering
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
# Make sure no one can alter the routing tables
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
# Don't act as a router
net.ipv4.ip_forward = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
# Turn on exec-shield
kernel.exec-shield = 1
kernel.randomize_va_space = 1
As you are proxying requests, I would add the following to your sysctl.conf file to make sure you do not run out of ports. It is optional, but if you are running into issues it is something to keep in mind:
net.ipv4.ip_local_port_range=1024 65000
As I normally evaluate the default settings and adjust accordingly, I did not include the IPv4 and ipv4.tcp_ options above. You can find an example below, but please do not copy and paste it; you'll need to do some reading before you start tuning these variables.
# Increase the TCP max buffer size settable using setsockopt()
net.ipv4.tcp_rmem = 4096 87380 8388608
net.ipv4.tcp_wmem = 4096 87380 8388608
# Increase Linux auto tuning TCP buffer limits
# min, default, and max number of bytes to use
# set max to at least 4MB, or higher if you use very high BDP paths
# Tcp Windows etc
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.core.netdev_max_backlog = 5000
net.ipv4.tcp_window_scaling = 1
The parameters above are not everything you should consider; there are many more you can tune, for example (see the sketch after this list):
Set the number of worker processes to 4 (one per CPU core).
Tune the backlog queue.
If you do not need an access log, simply turn it off to remove that disk I/O.
Optionally: lower or disable gzip compression if your CPU usage is getting too high.
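For illustration, a minimal nginx.conf fragment covering those points might look like the following sketch (the directive values are placeholders rather than tuned recommendations, and adserver_upstream is a made-up name for an upstream block defined elsewhere):
worker_processes 4;                 # one worker per vCPU core
events {
    worker_connections 16384;       # stays within the 65k file-descriptor budget
}
http {
    access_log off;                 # remove access-log disk I/O if you do not need it
    gzip off;                       # or lower gzip_comp_level if CPU usage gets too high
    server {
        listen 80 backlog=4096;     # example backlog value; it is capped by net.core.somaxconn
        location / {
            proxy_pass http://adserver_upstream;
        }
    }
}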

SAP RFC heavy upload - 3MB txt file produces 150MB upload

I have a problem with startRFC.exe, which produces much bigger network traffic than required. startRFC has 3 parameters = 3 internal tables = 3 CSV files. The total size of the files sent to SAP is 3 MB, but the upload takes 15 minutes and transfers 150 MB in total.
Has anyone experienced this?
POSSIBLE SOLUTION: It seems our 150 MB of traffic was correct even though the file size was only 3 MB. The problem is that if a row length of 13000 is defined in startRFC for an internal table, startRFC automatically pads all rows with spaces to that maximum length. We had roughly 6000 rows of 13000 characters = 78 MB if 1 char = 1 byte; if 1 char = 2 bytes, then 150 MB is the obvious result.

Setting Size of String Buffer When Accessing Windows Registry

I'm working on a VBScript web application with a newly introduced requirement to read connection string information from the registry instead of using hard-coded strings. I'm doing performance profiling because of the overhead this will introduce, and noticed in Process Monitor that reading the value returns two BUFFER OVERFLOW results before finally returning a success.
Looking online, Mark Russinovich posted about this topic a few years back, indicating that since the size of the registry value isn't known in advance, a default buffer of 144 bytes is used. Since there are two buffer-overflow responses, the time taken by the entire call is approximately doubled (and yes, I realize the difference is 40 microseconds, but with 1,000 or more page hits per second, I'm willing to invest some time in optimization).
My question is this: is there a way to tell WMI the size of the registry value before it tries to get it? Here's a sample of the code I'm using to access the registry:
svComputer = "." ' Local machine is simply "."
ivHKey = &H80000002 ' HKEY_LOCAL_MACHINE = &H80000002 (from WinReg.h)
svRegPath = "SOFTWARE\Path\To\My\Values"
Set oRegistry = GetObject("winmgmts:{impersonationLevel=impersonate}!\\" & svComputer & "\root\default:StdRegProv")
oRegistry.GetStringValue ivHKey, svRegPath, "Value", svValue
In VBScript, strings are just strings: they are however long they need to be, and you don't pre-define their length. Also, if performance is that much of an issue for you, you should consider using a compiled language instead of an interpreted one (or a cache for values you have already read).
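For example, a minimal Classic ASP sketch of the caching idea, assuming the code runs inside an ASP page where the Application object is available (the cache key name is made up):
' Sketch only: read the registry once, then serve the cached value on later hits.
Dim svValue
svValue = Application("ConnStringCache")   ' hypothetical cache key
If IsEmpty(svValue) Then
    Dim oRegistry
    Set oRegistry = GetObject("winmgmts:{impersonationLevel=impersonate}!\\.\root\default:StdRegProv")
    oRegistry.GetStringValue &H80000002, "SOFTWARE\Path\To\My\Values", "Value", svValue
    Application.Lock
    Application("ConnStringCache") = svValue
    Application.Unlock
End If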

Copy file over TCP socket in Ruby slow

I need to transfer a file via sockets:
# sender
require 'socket'
SIZE = 1024 * 1024 * 10
TCPSocket.open('127.0.0.1', 12345) do |socket|
  File.open('c:/input.pdf', 'rb') do |file|
    while chunk = file.read(SIZE)
      socket.write(chunk)
    end
  end
end
# receiver
require 'socket'
require 'benchmark'
SIZE = 1024 * 1024 * 10
server = TCPServer.new("127.0.0.1", 12345)
puts "Server listening..."
client = server.accept
time = Benchmark.realtime do
  File.open('c:/output.pdf', 'wb') do |file|
    while chunk = client.read(SIZE)
      file.write(chunk)
    end
  end
end
file_size = File.size('c:/output.pdf') / 1024 / 1024
puts "Time elapsed: #{time}. Transferred #{file_size} MB. Transfer per second: #{file_size / time} MB" and exit
Using Ruby 1.9 I get a transfer rate of ~16 MB/s (~22 MB/s using 1.8) when transferring an 80 MB PDF file from/to localhost. I'm new to socket programming, but that seems pretty slow compared to just using FileUtils.cp. Is there anything I'm doing wrong?
Well, even with localhost you still have to go through some of the TCP stack, introducing inevitable delays with packet fragmentation and rebuilding. It probably doesn't go out on the wire, where you'd be limited to 100 megabits (~12.5 MB/s) or a gigabit (~125 MB/s) theoretical maximum.
None of that overhead exists for a raw disk-to-disk file copy. Keep in mind that even SATA 1 gave you 1.5 gigabits/s, and I'd be surprised if you were still running on that back level. On top of that, your OS itself will undoubtedly be caching a lot of the file, which isn't possible when sending over the TCP stack.
16MB per second doesn't sound too bad to me.
I know this question is old, but why can't you compress the data before you send it, then decompress it on the receiving end?
require 'zlib'
def compress(input)
  Zlib::Deflate.deflate(input)
end
def decompress(input)
  Zlib::Inflate.inflate(input)
end
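For example, on the sending side you could compress the whole file before writing it to the socket. This is a sketch only, and note that a PDF is already largely compressed internally, so the gain may be small:
# Sketch only: compress the whole file, send it, and inflate it on the other end.
require 'socket'
require 'zlib'

data = File.open('c:/input.pdf', 'rb') { |f| f.read }   # binary-safe read of the whole file
compressed = Zlib::Deflate.deflate(data)

TCPSocket.open('127.0.0.1', 12345) do |socket|
  socket.write(compressed)
end

# The receiver would call Zlib::Inflate.inflate on the full received payload
# and then write the result to c:/output.pdf in binary ('wb') mode.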
(Shameless plug) AFT (https://github.com/wlib/aft) already does what you're making
