I've seen a bunch of these questions, most notably this one, which all say pretty much the same thing: this error is caused by the modification time of the source files being in the future, which usually occurs on a mounted NFS share when the server clock and client clock are not in sync.
I've tried touching all the files in my directory, as many have suggested. When that didn't work, I actually tried copying all the files off the mounted drive onto a local drive, touching them again, and rerunning the build, and I still get the same error. Is there any other way to solve this problem?
The NFS server and NFS client's system times are out of sync. The NFS server is probably drifting ahead.
Running make on an NFS mount is sensitive at the millisecond level, so client/server system times must be tight as a drum. This can be done by having your NFS client(s) sync their time off the NFS server's time using NTP at the highest rate allowed (usually every 8 seconds). On a LAN this should get you sub-millisecond accuracy.
Install NTP on both the NFS client(s) and the NFS server.
In the NTP config file of the clients (/etc/ntp.conf on Linux), comment out the entries starting with 'pool' or 'server' and add the line:
server [put address of the nfs server here] minpoll 3 maxpoll 3
... The '3' is the power-of-two exponent for the polling interval in seconds, hence 2^3 = 8 seconds. The NFS server's NTP config file can probably be left alone.
Restart the ntpd service on your client.
Test that your client is syncing by running this command on the client:
ntpq -p
... the important part is that the 'reach' column does not stay at zero for long, as that means the client cannot contact the server's NTP service.
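If you want to check this from a script rather than by eye, the reach column can be filtered with awk; a minimal sketch, assuming the standard ntpq -p column layout (the field position is the only assumption here):

```shell
# Flag peers whose 'reach' column (field 7 in the usual ntpq -p layout)
# is zero, meaning no recent responses from that server.
ntpq -p | awk 'NR > 2 && $7 == 0 { print "WARN: no recent responses from", $1 }'
```

The first two lines of ntpq -p output are headers, hence the NR > 2 guard.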
If they don't sync, you may have to reboot the client and server. This may be the case with Synology NAS as the NTP server.
Perform a full clean of your build (even nuke the directory and re-clone if convenient) and try again.
Similar answers are found throughout the internet, but they suggest simply installing NTP on the machines. That wasn't good enough to solve the issue for me - the clocks weren't synced tightly enough. A better way is to sync the clients' clocks to the server's clock on the local network at very frequent intervals. This is frowned upon over the internet but cheap on a LAN.
If this isn't possible, at least try to ensure NTP on the clients and server uses the same time servers in its pool/server entries.
If you are using Windows, check whether you are compiling on a FAT file system, and if so, try to switch.
FAT has a 2-second timestamp resolution, so it's possible for your build to add to an archive, compile the next file, but detect that the archive is already up to date. Time resolutions for other file systems are listed in another answer.
If you must use FAT, consider the .LOW_RESOLUTION_TIME special target.
Related
I have a closed network with a few nodes that are mutually consistent in time. For this I use NTP with one node as the NTP server. One of the nodes is a dumb box over which I have little control. It runs an SNTP client to synchronize time to the system NTP server. I now need the box to be set to a time that is offset from the system time by an amount that I control. I am trying to find out if this can be done using only the available SNTP client on the box. I will now present my approach and would love to hear from anyone who knows whether this can be done.
As far as I have found out, a standard NTP server cannot be made to serve a time that is offset from the server's system time. I will therefore have to write my own implementation. The conceptually simplest NTP server would be a broadcast-only server. My thought is that I can set the SNTP box to listen for broadcasts and then just send NTP broadcast packets set to my custom time.
Are there any NTP server implementations that allow me to do this out of the box?
Can anyone tell me how hard it is to write an SNTP broadcast server - or any other NTP server?
Does anyone know of any tutorials for how to write an NTP server?
Are there any show-stoppers to the scheme I am describing above?
To try to answer the questions that will inevitably come up:
Yes, I am also thinking about a new interface on the box to set the time to a value I specify. But that is not what I am asking about, and no, it will not be much simpler.
I have investigated whether I could just use the time that the box needs as the system time. This is not an option. I will need two different times, one for the system and one for the box.
All insight will be appreciated! Even opinions like "it should be doable."
You could use Jans to serve a fake time. I have no experience with this product, but I know of it from the NTP mailing list. It will allow you to serve a fake time, but it does none of the clock discipline of the reference implementation.
More info: http://www.vanheusden.com/time/jans/
Jans on its own is not suitable for providing a fake time with an offset, but it can provide real time plus a lot of test functionality, such as time drift and so on.
I used Jans as the source of real time in conjunction with libfaketime on Linux CentOS 6 as a fake NTP server with a + or - offset.
Just wget jans-0.3.tgz and run "make" from here:
https://www.vanheusden.com/time/jans/
The RPM of libfaketime for CentOS 6 is here:
http://rpm.pbone.net/info_idpl_54489387_distro_centos6_com_libfaketime-0.9.7-1.1.x86_64.rpm.html
or find it for your distro.
Stop the real NTP server if it's running on your Linux box:
service ntpd stop
Run the fake NTP server (for example, 15 days in the past):
LD_PRELOAD=/usr/lib64/libfaketime.so.1 FAKETIME="-15d" ./jans -P 123 -t real
Keep in mind that an NTP server can only be reached on port 123; otherwise you will need iptables masquerading.
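If you do end up running the fake server on a nonstandard port, a NAT redirect can map the standard NTP port onto it. This rule is illustrative only (port 1123 is a placeholder, and it must run as root), not something taken from the jans documentation:

```shell
# Redirect incoming NTP traffic (UDP port 123) to a fake server
# listening on port 1123 (placeholder) on the same host.
iptables -t nat -A PREROUTING -p udp --dport 123 -j REDIRECT --to-ports 1123
```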
My Xen DomUs keep drifting their times. My Dom0 is using kernel 3.2.0 for AMD64. The DomUs are using 2.6.26. How do you keep the time from drifting?
I guess your DomUs are drifting relative to your Dom0's time. In that case, you can do one of the following:
Configure NTP (Network Time Protocol) on your DomUs. It is a fairly simple process. You can make changes in your /etc/ntp.conf file (assuming it is a Linux DomU) to include Dom0 as an NTP server and then start the NTP daemon ("service ntpd start"). This way the DomUs can sync their time with Dom0.
Configure Dom0 and all DomUs to sync their time with an external server. Make the change in the NTP configuration file of each, then restart NTP on all of them. This way all of them will be in sync with some external source, and time drift should not happen.
For an immediate time sync, you can use the "ntpdate" command on the DomU.
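For the first option, the /etc/ntp.conf change on the DomU can be as small as one line; a sketch, assuming the Dom0 is reachable at 192.168.0.1 (a placeholder address):

```shell
# /etc/ntp.conf on the DomU: point at the Dom0 (placeholder address).
server 192.168.0.1 iburst
```

After editing, restart the daemon ("service ntpd restart"), or run "ntpdate 192.168.0.1" once for an immediate sync.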
I was wondering how the Windows host-name resolution system works.
More precisely I wonder about the use, or lack thereof, of local caching in the process.
According to Microsoft TCP/IP Host Name Resolution Order, the process is as follows:
The client checks to see if the name queried is its own.
The client then searches a local Hosts file, a list of IP addresses and names stored on the local computer.
Domain Name System (DNS) servers are queried.
If the name is still not resolved, NetBIOS name resolution sequence is used as a backup. This order can be changed by configuring the NetBIOS node type of the client.
What I was wondering is, whether stage (2) is cached in some way.
The sudden interest arose these last few days, as I installed malware protection (SpyBot) that utilizes the HOSTS file. In fact, it is now 14K entries big, and counting...
The file is currently sorted according to host name, but this of course doesn't have to be.
lg(14K) ≈ 14, so about 14 steps through the file for each resolution request. These requests probably arrive at a rate of a few every second, and usually for the same few hundred hosts (tops).
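That back-of-envelope figure is easy to check: a binary search over N sorted entries needs about log2(N) comparisons, and for N = 14000 that is:

```shell
# Binary search over N sorted entries needs ~log2(N) comparisons.
awk 'BEGIN { n = 14000; print int(log(n)/log(2)) + 1 }'   # prints 14
```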
My view of how this should work is like this:
On system startup the windows DNS-resolution mechanism loads the HOSTS file a single time.
It sorts the file in a single pass; a working copy is loaded into memory.
The original HOSTS file will not be read again for the life of the resolution process.
All network-processes (IE, Firefox, MSN...) work via this process/mechanism.
No other process directly interfaces/reads HOSTS file.
Upon receiving a name resolution request, the process checks its memory-resident cache.
If it finds the proper IP, then it answers appropriately.
Otherwise (it's not cached), the resolution process continues to the memory-resident (sorted) HOSTS file and does a quick binary search over it. From here on, the process continues as originally described.
The result of the resolution is cached for further use.
Though I am not sure as to the significance of these, I would really appreciate an answer.
I just want to see if my reasoning is right, and if not, why so?
I am aware that in this age of always-on PCs the cache must be periodically (or incrementally) purged. I ignore this for now.
In the DNS Client service (dnsrslvr) you can see a function called LoadHostFileIntoCache. It goes something like this:
file = HostsFile_Open(...);
if (file)
{
    while (HostsFile_ReadLine(...))
    {
        Cache_RecordList(...);
        ...
    }
    HostsFile_Close(...);
}
So how does the service know when the hosts file has been changed? At startup a thread is created which executes NotifyThread, and it calls CreateHostsFileChangeHandle, which calls FindFirstChangeNotificationW to start monitoring the drivers\etc directory. When there's a change the thread clears the cache using Cache_Flush.
Your method does not work when the IP address of a known hostname is changed in hosts without adding or changing a name.
Technet says that the file will be loaded into the DNS client resolver cache.
IMO this is mostly irrelevant: a lookup in a local file (once it's in the disk cache) will still be several orders of magnitude faster than asking the DNS servers of your ISP.
I don't think that each process maintains its own cache. If there is a cache, it probably exists in the TCP/IP stack or kernel somewhere, and even then only for a very short while.
I've had situations where I'll be tinkering around with my hosts file and then using the addresses in a web browser and it will update the resolved names without me having to restart the browser.
Is there a utility for Windows that allows you to test different aspects of file transfer operations across a LAN or a WAN?
Example...
How long does it take to move a file of a known size (500 MB or 1 GB) from Server A (on site) to Server B (on site) or to Server C (off-site satellite location)?
D-ITG will allow you to test many aspects of your links. It does not necessarily allow you to transfer a file directly, but it lets you control almost all aspects of the transmission of data across the wire.
If all you are interested in is bulk transfer time (and not all the nitty-gritty details), you could just use a basic FTP application and time the transfer.
Probably nothing you've not already figured out. You could get some coarse-grained metrics using a batch file to coordinate:
start monitoring
copy file
stop monitoring
Copying the file might just mean initiating a file copy between two nodes on the LAN, or it might initiate an FTP copy between two nodes on the WAN.
Monitoring could be as basic as writing the current time to output or to a file, or as complex as adding performance-counter metrics from the network adapters on the two machines.
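The start/copy/stop sequence above can be sketched as a small script; here local temp files stand in for the real source and destination paths, so treat the paths as placeholders:

```shell
# Coarse-grained timing of a file copy; temp files stand in for the
# real LAN/WAN source and destination paths.
src=$(mktemp); dst=$(mktemp)
dd if=/dev/zero of="$src" bs=1024 count=1024 2>/dev/null  # 1 MB sample file
start=$(date +%s)                                         # start monitoring
cp "$src" "$dst"                                          # copy file
end=$(date +%s)                                           # stop monitoring
echo "copy took $((end - start)) seconds"
rm -f "$src" "$dst"
```

For a remote target, the cp line would become a copy to a UNC path or an FTP transfer, and a larger sample file would give a more meaningful number.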
A commercial WAN emulator would also give you the information you're looking for. I've used the Shunra Appliance successfully in the past. It's pretty expensive, so I'd really only recommend it if critical business success is riding on understanding how application behavior could change based on network conditions, and it is something you could incorporate into regular testing activities.
I have over 500 machines distributed across a WAN covering three continents. Periodically, I need to collect text files which are on the local hard disk of each blade. Each server is running Windows Server 2003, and the files are mounted on a share which can be accessed remotely as \\server\Logs. Each machine holds many files which can be several MB each, and the size can be reduced by zipping.
Thus far I have tried using PowerShell scripts and a simple Java application to do the copying. Both approaches take several days to collect the 500 GB or so of files. Is there a better solution which would be faster and more efficient?
I guess it depends what you do with them ... if you are going to parse them for metrics data into a database, it would be faster to have that parsing utility installed on each of those machines to parse and load into your central database at the same time.
Even if all you are doing is compressing and copying to a central location, set up those commands in a .cmd file and schedule it to run on each of the servers automatically. Then you will have distributed the work amongst all those servers, rather than forcing your one local system to do all the work. :-)
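A sketch of that per-server step, written as shell for illustration (on the real Windows servers this would live in the scheduled .cmd; the directory names here are placeholders):

```shell
# Compress the server's logs into one archive before shipping it.
log_dir=$(mktemp -d); out_dir=$(mktemp -d)        # placeholders for real paths
echo "sample log line" > "$log_dir/app.log"
tar -czf "$out_dir/logs.tar.gz" -C "$log_dir" .   # compress locally
ls "$out_dir"                                     # -> logs.tar.gz
# ...then copy/FTP "$out_dir/logs.tar.gz" to the central share
```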
The first improvement that comes to mind is to not ship entire log files, but only the records from after the last shipment. This of course is assuming that the files are being accumulated over time and are not entirely new each time.
You could implement this in various ways: if the files have date/time stamps you can rely on, running them through a filter that removes the older records from consideration and dumps the remainder would be sufficient. If there is no such discriminator available, I would keep track of the last byte/line sent and advance to that location prior to shipping.
Either way, the goal is to only ship new content. In our own system logs are shipped via a service that replicates the logs as they are written. That required a small service that handled the log files to be written, but reduced latency in capturing logs and cut bandwidth use immensely.
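A minimal sketch of the byte-offset bookkeeping described above, assuming a simple state file per log (all the file names here are hypothetical):

```shell
# Ship only the bytes appended since the last run, tracked in a state file.
log=$(mktemp); state=$(mktemp)
printf 'old line\n' > "$log"
wc -c < "$log" | tr -d ' ' > "$state"      # bookmark: bytes already shipped
printf 'new line\n' >> "$log"              # new content arrives later
offset=$(cat "$state")
tail -c +"$((offset + 1))" "$log"          # emits only the unsent portion
wc -c < "$log" | tr -d ' ' > "$state"      # advance the bookmark
rm -f "$log" "$state"
```

In a real shipper the "emits" line would feed the compressor/FTP step, and the bookmark would only be advanced after a confirmed upload.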
Each server should probably:
manage its own log files (start new logs before uploading and delete sent logs after uploading)
name the files (or prepend metadata) so the server knows which client sent them and what period they cover
compress log files before shipping (compress + FTP + uncompress is often faster than FTP alone)
push log files to a central location (FTP is faster than SMB; the Windows FTP command can be automated with "-s:scriptfile")
notify you when it cannot push its log for any reason
do all the above on a staggered schedule (to avoid overloading the central server)
Perhaps use the server's last IP octet multiplied by a constant as an offset in minutes from midnight?
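The last-octet idea is cheap to compute in the upload script itself; a sketch (the address and the 5-minute constant are placeholders):

```shell
# Derive a per-server start time from the last octet of its IP address.
ip=192.168.1.42                              # placeholder address
octet=${ip##*.}                              # -> 42
echo "start upload $((octet * 5)) minutes after midnight"
```

With a constant of 5 minutes, 256 possible octets spread the uploads across most of a day.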
The central server should probably:
accept log files sent and queue them for processing
gracefully handle receiving the same log file twice (should it ignore or reprocess?)
uncompress and process the log files as necessary
delete/archive processed log files according to your retention policy
notify you when a server has not pushed its logs lately
We have a similar product on a smaller scale here. Our solution is to have the machines generating the log files push them to a NAS on a daily basis in a randomly staggered pattern. This solved a lot of the problems of a more pull-based method, including bunched-up read/write times that kept a server busy for days.
It doesn't sound like the storage server's bandwidth would be saturated, so you could pull from several clients at different locations in parallel. The main question is: what is the bottleneck that slows the whole process down?
I would do the following:
Write a program to run on each server, which will do the following:
Monitor the logs on the server
Compress them at a particular defined schedule
Pass information to the analysis server.
Write another program which sits on the core server and does the following:
Pulls compressed files when the network/CPU is not too busy.
(This can be multi-threaded.)
This uses the information passed to it from the end computers to determine which log to get next.
Uncompress and upload to your database continuously.
This should give you a solution which provides up to date information, with a minimum of downtime.
The downside will be relatively consistent network/computer use, but tbh that is often a good thing.
It will also allow easy management of the system, to detect any problems or issues which need resolving.
NetBIOS copies are not as fast as, say, FTP. The problem is that you don't want an FTP server on each server. If you can't process the log files locally on each server, another solution is to have all the servers upload the log files via FTP to a central location, which you can process from. For instance:
Set up an FTP server as a central collection point. Schedule tasks on each server to zip up the log files and FTP the archives to your central FTP server. You can write a program which automates the scheduling of the tasks remotely using a tool like schtasks.exe:
KB 814596: How to use schtasks.exe to Schedule Tasks in Windows Server 2003
You'll likely want to stagger the uploads back to the FTP server.