Strange pause when Python downloads a file from FTP

I have some code that checks some directories on an FTP server and downloads new files to my server. There are over 3 million files on the server (zip archives). I do many non-optimal things in this code, but all of them run fast except the downloading part. Here is that part:
lf = open(local_filename, "wb")  # here I create a blank file
print("opened")
try:
    ftp.retrbinary("RETR " + name, lf.write)  # here I write the data
    print("wrote")
except ftplib.error_perm:
    pass
lf.close()  # here I close the file with the data
print("closed")
My problem is in the part between print("opened") and print("wrote"). My Python console (2.7) stays silent for 10-20 seconds at this phase, even though the files being downloaded are tiny, below 2-3 KB.
The strange thing is this: when I run the script from my own PC (Windows 7), it works great and fast, but when I run it on Windows Server 2012 R2 (a VDS), I get this sad pause. Guys, I need your help. What should I do to configure my server for fast downloads?

I got the answer. You just need to run the following command:
netsh int tcp set global ecncapability=disabled
and everything will be excellent!
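For reference, you can check the current setting before and after the change with the command below; once the fix has been applied, the output should list "ECN Capability" as disabled:
netsh int tcp show global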

Related

Upload to HDFS stops with warning "Slow ReadProcessor read"

When I try to upload files that are about 20 GB into HDFS, they usually upload until about 12-14 GB and then stop, and I get a bunch of these warnings on the command line:
"INFO hdfs.DataStreamer: Slow ReadProcessor read fields for block BP-222805046-10.66.4.100-1587360338928:blk_1073743783_2960 took 62414ms (threshold=30000ms); ack: seqno: 226662 reply: SUCCESS downstreamAckTimeNanos: 0 flag: 0, targets:"
However, if I try to upload the files 5-6 times, they sometimes work after the 4th or 5th attempt. I believe that if I alter some DataNode storage settings I can achieve consistent uploads without issues, but I don't know which parameters to modify in the Hadoop configuration. Thanks!
Edit: This happens when I put the file into HDFS through a Python program that uses a subprocess call to put the file in (a sketch of that kind of call is below). However, even if I call it directly from the command line, I still run into the same issue.
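For context, here is a minimal sketch of that kind of subprocess call; the paths are placeholders and the exact command is an assumption, not the asker's actual code:
import subprocess

local_path = "/data/big_file.bin"      # hypothetical ~20 GB local file
hdfs_dest = "/user/me/big_file.bin"    # hypothetical HDFS destination

# Put the file into HDFS via the hdfs CLI, overwriting any partial copy.
result = subprocess.run(
    ["hdfs", "dfs", "-put", "-f", local_path, hdfs_dest],
    capture_output=True,
    text=True,
)
if result.returncode != 0:
    # The "Slow ReadProcessor read" warnings and any final error appear here.
    print(result.stderr)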

What is the file ORA_DUMMY_FILE.f in Oracle?

Oracle version: 12.2.0.1
As you know, these are the Unix processes for the parallel servers in Oracle:
ora_p000_ora12c
ora_p001_ora12c
....
ora_p???_ora12c
They can also be seen with the view gv$px_process.
The SPID for each parallel server can be obtained from there.
Then I look for the open files associated with the parallel server here:
ls -l /proc/<spid>/fd
And I'm finding around 500-10,000 file descriptors like this one for several parallel servers:
991 -> /u01/app/oracle/admin/ora12c/dpdump/676185682F2D4EA0E0530100007FFF5E/ORA_DUMMY_FILE.f (deleted)
I've deleted them using the following (actually, I've created a small script to do it, because there are thousands of them):
gdb -p <spid>
gdb> p close(<fd_id>)
But after some hours the file descriptors start being created again (hundreds every day).
If they are not deleted, eventually the Linux limit is reached and any parallel query throws an error like this:
ORA-12801: error signaled in parallel query server P001
ORA-01116: error in opening database file 132
ORA-01110: data file 132: '/u02/oradata/ora12c/pdbname/tablespacenaname_ts_1.dbf'
ORA-27077: too many files open
Does anyone have any idea how and why these file descriptors are being created, and how to avoid it?
Edit: Added some more information that could be useful.
I've verified that when a new PDB is created, a directory DATA_PUMP_DIR is created in it (select * from all_directories) pointing to:
/u01/app/oracle/admin/ora12c/dpdump/<xxxxxxxxxxxxx>
The Linux directory is also created.
Also, one file descriptor is created pointing to ORA_DUMMY_FILE.f in the new dpdump subdirectory, like the ones described initially:
lsof | grep "ORA_DUMMY_FILE.f (deleted)"
/u01/app/oracle/admin/ora12c/dpdump/<xxxxxxxxxxxxx>/ORA_DUMMY_FILE.f (deleted)
This may be OK; the problem I face is the continuous growth of file descriptors pointing to ORA_DUMMY_FILE.f until they reach the Linux limits.
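For what it's worth, the descriptor growth can be tracked without gdb. Here is a minimal sketch (assuming Linux's /proc layout, SPIDs taken from gv$px_process, and a shell running as root or the oracle user); the SPIDs below are placeholders:
import os

spids = ["12345", "12346"]  # placeholder SPIDs of the ora_pNNN processes

for spid in spids:
    fd_dir = "/proc/%s/fd" % spid
    leaked = 0
    for fd in os.listdir(fd_dir):
        try:
            target = os.readlink(os.path.join(fd_dir, fd))
        except OSError:
            continue  # descriptor closed between listdir() and readlink()
        if "ORA_DUMMY_FILE.f" in target:
            leaked += 1
    print("spid %s: %d descriptors on ORA_DUMMY_FILE.f" % (spid, leaked))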

Tailing a log file on a Windows 2008 server

I use the following command to tail a log file on a W2K8 server from a Windows 8 client PC:
get-content "file" -wait
The log file shows up and it sits there patiently waiting for new lines to be added,
but new lines never show up when they are added.
It worked fine on a W2K3 server, but somehow tailing on the W2K8 server does not work.
The log file is updated from a C# service:
Trace.Listeners.Add(new TextWriterTraceListener(logFileName, "fileListener"));
Trace.WriteLine(....)
Does anybody know what to do about this?
I repro'd the issue on my WS08 system using the Trace class. I tried both Trace.Flush() and writing lots of data (100K), and neither caused get-content -wait to respond.
However, I did find a workaround. You'd have to update your C# program. (While experimenting I came to the conclusion that gc -wait is pretty fragile.)
$twtl = new-object diagnostics.TextWriterTraceListener "C:\temp\delme.trace", "filelistener"
[diagnostics.trace]::Listeners.add($twtl)
# This sequence would result in gc -wait displaying output
[diagnostics.trace]::WriteLine("tracee messagee thingee")
[diagnostics.trace]::flush()
# Here I did file name completion such that c:\temp\delme.trace was in the
# completion list.
# In other words I typed something like d and pressed tab
# And this worked every time
# After searching for quite a while I finally found that
# get-itemproperty c:\temp\d*
# produced the same effect. The following sequence always worked:
# (without requiring a tab press)
[diagnostics.trace]::WriteLine("tracee messagee thingee")
[diagnostics.trace]::flush()
get-itemproperty c:\temp\d*
# To finish up
[diagnostics.trace]::Listeners.remove($twtl)
$twtl.close()
$twtl.dispose()
I think there's a bug somewhere. I suggest filing it on the Connect website.

FTP client to zip before upload and unzip on the server after upload

I am always working with some big websites that are annoying to upload given the number of small files.
I use FileZilla, but I am happy to buy a commercial solution if there is one out there that can zip the files before upload and then unzip them after upload.
It's a pain to have to do that manually all the time.
If someone knows of any FTP client, or extension for FileZilla or another client, that would do that... I sent an email to support for CuteFTP and WS_FTP; no answer so far...
I know the FTP protocol does not allow this command; that's why I'm asking for an extension (if anyone knows of one) or a free or commercial FTP client that does the job...
Use this in a PHP file, maybe called zip.php:
<?php
$zip = new ZipArchive();
$res = $zip->open('yourzipfile.zip');
if ($res === true) {
    $zip->extractTo('./');
    $zip->close();
    echo 'ok';
} else {
    echo 'failed';
}
Zip your site and upload it to the root of your server.
Also upload zip.php to the same place.
Now enter this in your browser: www.yoursite.com/zip.php
If everything goes well, you will receive "ok"; otherwise there is a problem.
For more details on the class: http://www.php.net/manual/en/class.ziparchive.php
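The client side can be scripted as well. Here is a minimal sketch in Python (the host, credentials and folder names are hypothetical) that zips the site, uploads the archive and zip.php over FTP, and then calls zip.php over HTTP:
import ftplib
import os
import urllib.request
import zipfile

site_dir = "my_site"          # hypothetical local folder with the website
archive = "yourzipfile.zip"   # same name the zip.php above expects

# Zip the whole site locally.
with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
    for root, _, files in os.walk(site_dir):
        for name in files:
            path = os.path.join(root, name)
            zf.write(path, os.path.relpath(path, site_dir))

# Upload the archive and zip.php to the server root (hypothetical login).
ftp = ftplib.FTP("ftp.yoursite.com", "user", "password")
for fname in (archive, "zip.php"):
    with open(fname, "rb") as fh:
        ftp.storbinary("STOR " + fname, fh)
ftp.quit()

# Trigger the unzip on the server; "ok" means it worked.
print(urllib.request.urlopen("http://www.yoursite.com/zip.php").read())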
Couldn't you set up some bash scripts to rar and FTP a file, and then, on the server, check for the file's presence every x seconds and unrar and remove it when it is there?

Why is the read-only attribute set (sometimes) for files created by my service?

NOTE: This is a complete re-write of this question. I'd previously conflated some ACL issues with the problem I'm hunting, which is probably why there were no answers.
I have a windows service that uses the standard open/close/write routines to write a log file (it reads stuff from a pipe and stuffs it into the log). A new log file is opened each day at midnight. The system is Windows XP Embedded.
The service runs as the Local System service (CreateService with NULL for the user).
When the service initially starts up, it creates a log file and writes to it with no problems. At this point everything is OK, and you can restart the service (or the computer) with no issues.
However, at midnight (when the day changes), the service creates a new log file and writes to it. The funny thing is, this new log file has the 'read only' flag set. That's a problem because if the service (or the computer) restarts, the service can no longer open the file for writing.
Here's the relevant information from the system, captured after the problem had already happened:
Directory of C:\bbbaudit
09/16/2009 12:00 AM <DIR> .
09/16/2009 12:00 AM <DIR> ..
09/16/2009 12:00 AM 437 AU090915.ADX
09/16/2009 12:00 AM 62 AU090916.ADX
attrib c:\bbbaudit\*
A C:\bbbaudit\AU090915.ADX <-- old log file (before midnight)
A R C:\bbbaudit\AU090916.ADX <-- new log file (after midnight)
cacls output:
C:\ BUILTIN\Administrators:(OI)(CI)F
NT AUTHORITY\SYSTEM:(OI)(CI)F
CREATOR OWNER:(OI)(CI)(IO)F
BUILTIN\Users:(OI)(CI)R
BUILTIN\Users:(CI)(special access:)
FILE_APPEND_DATA
BUILTIN\Users:(CI)(IO)(special access:)
FILE_WRITE_DATA
Everyone:R
C:\bbbaudit BUILTIN\Administrators:(OI)(CI)F
NT AUTHORITY\SYSTEM:(OI)(CI)F
CFN3\Administrator:F
CREATOR OWNER:(OI)(CI)(IO)F
Here's the code I use to open/create the log files:
static int open_or_create_file(char *fname, bool &alreadyExists)
{
    int fdes;
    // try to create new file, fail if it already exists
    alreadyExists = false;
    fdes = open(fname, O_WRONLY | O_APPEND | O_CREAT | O_EXCL);
    if (fdes < 0)
    {
        // try to open existing, don't create new file
        alreadyExists = true;
        fdes = open(fname, O_WRONLY | O_APPEND);
    }
    return fdes;
}
I'm having real trouble figuring out how the file is getting that read-only flag on it. If anyone can give me a clue or a direction, I'd greatly appreciate it.
The compiler is VC 6. (Yeah, I know, it's so far out of date it isn't funny, until you realize that we've only just upgraded to XPE from NT 3.51.)
The Microsoft implementation of open() has an optional third argument 'pmode', which is required to be present when the second argument 'oflag' includes the O_CREAT flag. The pmode argument specifies the file permission settings, which are set when the new file is closed for the first time. Typically you would pass S_IREAD | S_IWRITE for pmode, resulting in an ordinary read/write file.
In your case you have specified O_CREAT but omitted the third argument, so open() has used whatever value happened to be on the stack at the third argument position. The value of S_IWRITE is 0x0080, so if the value in the third argument position happened to have bit 7 clear, it would result in a read-only file. The fact that you got a read-only file only some of the time, is consistent with stack junk being passed as the third argument.
Below is the link for the Visual Studio 2010 documentation for open(). This aspect of the function's behaviour has not changed since VC 6.
http://msdn.microsoft.com/en-us/library/z0kc8e3z.aspx
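To see the mechanism in isolation, here is a small illustration in Python rather than C (the file name is hypothetical): on Windows, the create-time permission mode determines whether the new file ends up with the read-only attribute.
import os
import stat

path = "demo_readonly.log"  # hypothetical file name for the demo

# Create the file with a mode that lacks the write bit, mimicking a pmode
# value whose S_IWRITE (0x0080) bit happens to be clear.
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o444)
os.close(fd)
print(os.access(path, os.W_OK))  # False: the file came out read-only

# Clean up: clear the read-only attribute so the demo file can be deleted.
os.chmod(path, stat.S_IWRITE)
os.remove(path)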
Well, I have no idea what the underlying problem is with the 'open' APIs in this case. In order to 'fix' the problem, I ended up switching to using the Win32 APIs for file management (CreateFile, WriteFile, CloseHandle).
