Ignite - explanation of the 31100 time server port

Can anyone please explain why Ignite uses port 31100? The only information I could find on the web is that it is a "time server" port; I couldn't find anything beyond that.

I see the following configuration options in the Ignite project:
/** Base port number for time server. */
private int timeSrvPortBase = DFLT_TIME_SERVER_PORT_BASE; // 31100

/** Port number range for time server. */
private int timeSrvPortRange = DFLT_TIME_SERVER_PORT_RANGE; // 100

/**
 * Gets base UDP port number for grid time server. Time server will be started on one of free ports in range
 * {@code [timeServerPortBase, timeServerPortBase + timeServerPortRange - 1]}.
 * <p>
 * Time server provides clock synchronization between nodes.
 *
 * @return Time server port base.
 */
public int getTimeServerPortBase() {
    return timeSrvPortBase;
}

/**
 * Defines port range to try for time server start.
 *
 * If port range value is <tt>0</tt>, then implementation will try bind only to the port provided by
 * {@link #setTimeServerPortBase(int)} method and fail if binding to this port did not succeed.
 *
 * @return Number of ports to try before server initialization fails.
 */
public int getTimeServerPortRange() {
    return timeSrvPortRange;
}
But I don't see any usage of these methods anywhere else; it looks like an obsolete feature. I've just started one server node of 2.10 and didn't see any open ports in the 311xx range (sudo netstat -atnp | grep 311[0-9][0-9] was empty). Are you sure that your Ignite instance exposes this port? Which version do you use?
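If you do want to pin these ports down explicitly, there are matching setters on IgniteConfiguration (they are referenced by the Javadoc quoted above). Below is a minimal sketch, assuming your Ignite version still exposes setTimeServerPortBase / setTimeServerPortRange; as noted, the time server itself may not actually be started in recent versions, so treat this as illustrative only:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class TimeServerPortExample {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Restrict the (possibly obsolete) time server to ports 31100..31109.
        // These setters mirror the getters quoted in the question; verify they
        // exist and are not deprecated in your Ignite version.
        cfg.setTimeServerPortBase(31100);
        cfg.setTimeServerPortRange(10);

        try (Ignite ignite = Ignition.start(cfg)) {
            System.out.println("Started node: " + ignite.name());
        }
    }
}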

Related

How to describe websocket api using Apidoc?

This issue "Documenting WebSockets #501"
said "apiDoc was not designed for websockets, but i think you can use it too. The #api /endpoint/path must be replaced with the function / message name, the parameters could be documented the same way."
This is my code:
/**
 * @api onBinaryMessage Set a binary message handler on the connection.
 *
 * @apiGroup websocket
 * @apiParam {Buffer} buffer
 * @apiParam {ServerWebSocket} websocket
 */
Is there a standard usage? Orz

Why is php decimal-ext throwing an exception for wrong return type of compareTo method?

I've installed the decimal-ext extension and the php-decimal/laravel Composer package. I'm using it to compare large decimal numbers. On my laptop everything works correctly, but on my staging server the following error is thrown:
Return value of Decimal\Decimal::compareTo() must be of the type int, none returned
and here is the code:
(new Decimal($value))->compareTo($maxNumber) == -1;
As I said I'm not getting this error on my laptop.
Laravel: 5.8
PHP: 7.4.3
Server: Ubuntu 18.04
I spent some time on this but figured it out. The decimal-ext extension was not loaded in the server's php.ini file. PHP didn't throw an exception about the missing extension but about the wrong return type, because the Decimal class was actually loaded (it was installed via Composer). I could instantiate an instance, but the implementation was missing:
/**
 * Ordering
 *
 * This method is equivalent to the `<=>` operator.
 *
 * @param mixed $other
 *
 * @return int 0 if this decimal is considered equal to $other,
 *             -1 if this decimal should be placed before $other,
 *              1 if this decimal should be placed after $other.
 */
public function compareTo($other): int {}

Why does UDPSocket.send always call getaddrinfo in Ruby?

I just solved a latency issue in our infrastructure that was caused by this code snippet triggering a call to getaddrinfo on every run:
sock = UDPSocket.open
sock.send("#{key}|#{value}", 0,
          GRAPHITE_SERVER,
          STATSD_PORT)
sock.close
Because we use statsd and graphite for high-volume event and stats monitoring, we were effectively triggering numerous calls to getaddrinfo on every API call, and potentially tens of thousands every minute.
I modified this code to use the internal IP address, not the DNS name, of our graphite server, and was able to resolve the latency issue (presumably because the internal AWS VPC DNS server was not equipped to handle such a high volume of requests).
Now that my issue is resolved, I would love to know why the UDP implementation in Ruby does not use a cached IP address (presumably based on the TTL of the domain name entry). Here is the relevant function in full; you can see the call to rsock_addrinfo near the end:
static VALUE
udp_send(int argc, VALUE *argv, VALUE sock)
{
    VALUE flags, host, port;
    struct udp_send_arg arg;
    VALUE ret;

    if (argc == 2 || argc == 3) {
        return rsock_bsock_send(argc, argv, sock);
    }
    rb_scan_args(argc, argv, "4", &arg.sarg.mesg, &flags, &host, &port);

    StringValue(arg.sarg.mesg);
    GetOpenFile(sock, arg.fptr);
    arg.sarg.fd = arg.fptr->fd;
    arg.sarg.flags = NUM2INT(flags);
    arg.res = rsock_addrinfo(host, port, rsock_fd_family(arg.fptr->fd), SOCK_DGRAM, 0);
    ret = rb_ensure(udp_send_internal, (VALUE)&arg,
                    rsock_freeaddrinfo, (VALUE)arg.res);

    if (!ret) rsock_sys_fail_host_port("sendto(2)", host, port);

    return ret;
}
I assume this decision is intentional and would love to learn more about the reasons why.
getaddrinfo does not return any data about the TTL... in fact it may not have it at all, because the resolution is not necessarily done over DNS (it could come from the hosts file, LDAP, etc.; see /etc/nsswitch.conf).
From its manual, here is the signature and the structure returned:
int getaddrinfo(const char *hostname, const char *servname,
                const struct addrinfo *hints, struct addrinfo **res);

struct addrinfo {
    int ai_flags;              /* input flags */
    int ai_family;             /* protocol family for socket */
    int ai_socktype;           /* socket type */
    int ai_protocol;           /* protocol for socket */
    socklen_t ai_addrlen;      /* length of socket-address */
    struct sockaddr *ai_addr;  /* socket-address for socket */
    char *ai_canonname;        /* canonical name for service location */
    struct addrinfo *ai_next;  /* pointer to next in list */
};
After a successful call to getaddrinfo(), *res is a pointer to a linked list of one or more addrinfo structures.
So it is up to whatever sits "behind" getaddrinfo to do caching or not, because getaddrinfo may or may not have used DNS to retrieve the data.
Some DNS-specific APIs, like getdnsapi, will give the caller information about the TTL; see https://getdnsapi.net/documentation/spec/ and example 6.2:
6·2 Get IPv4 and IPv6 Addresses for a Domain Name
This example is similar to the previous one, except that it retrieves more information than just the addresses, so it traverses the replies_tree. In this case, it gets both the addresses and their TTLs.
Without any cache layer anywhere, since UDP is stateless, any new send must trigger resolution in some way or form.
You said:
"modified this code to use the internal IP address, not the DNS name"
You should instead install a local (on the box) recursive caching nameserver, such as unbound. All your local applications will benefit from it and from faster DNS resolution (depending also on how /etc/nsswitch.conf, /etc/resolv.conf and /etc/hosts are set up).
As for the associated bug report hinted at by @Casper, at its core it seems to be more an issue of IPv6 vs IPv4. It could be solved either by adjusting /etc/gai.conf (or its equivalent), or by some more clever programming around opening the connection using the so-called "happy eyeballs" algorithm: you resolve both A and AAAA at the same time, which means two parallel DNS queries (because the protocol does not let you combine them into one), and use the fastest reply that comes back, with a slight preference for AAAA if you want to be in the modern camp; in that case you fire the A query only some given number of milliseconds after the AAAA one, to catch the case where you get no reply at all for AAAA, or a negative one. See RFC 6555 for details.

spring redis increment

I want to replace memcached with spring-redis.
The memcached client has this function:
/**
 * @param key
 *            key
 * @param delta
 *            increment delta
 * @param initValue
 *            the initial value to be added when value is not found
 * @param timeout
 *            operation timeout
 * @param exp
 *            the initial value expire time, in seconds. Can be up to 30
 *            days. After 30 days, is treated as a unix timestamp of an
 *            exact date.
 * @return
 * @throws TimeoutException
 * @throws InterruptedException
 * @throws MemcachedException
 */
long incr(String key, long delta, long initValue, long timeout, int exp)
        throws TimeoutException, InterruptedException, MemcachedException;
spring-redis has this function:
Long increment(K key, long delta);
How can I set an operation timeout (not an expiry) in spring-redis?
spring-redis doesn't allow you to configure a timeout per individual request (the way the memcached client does). You can configure a timeout per connection with a setting in application.properties:
spring.redis.timeout=5000
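For the increment itself, the closest spring-redis counterpart is ValueOperations.increment(key, delta). Below is a minimal sketch using the Lettuce driver; the host/port values are placeholders, and the commandTimeout shown is the programmatic equivalent of spring.redis.timeout, i.e. still a per-connection setting rather than a per-request one:

import java.time.Duration;

import org.springframework.data.redis.connection.RedisStandaloneConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.core.StringRedisTemplate;

public class RedisIncrementExample {
    public static void main(String[] args) {
        // Per-connection command timeout; spring-redis has no per-request
        // timeout like the memcached incr(...) overload.
        LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
                .commandTimeout(Duration.ofMillis(5000))
                .build();
        LettuceConnectionFactory factory = new LettuceConnectionFactory(
                new RedisStandaloneConfiguration("localhost", 6379), clientConfig);
        factory.afterPropertiesSet();

        StringRedisTemplate template = new StringRedisTemplate(factory);

        // INCRBY: Redis treats a missing key as 0, so there is no separate
        // "initial value" argument as in the memcached incr signature.
        Long counter = template.opsForValue().increment("page:views", 5L);
        System.out.println("counter = " + counter);

        factory.destroy();
    }
}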

Hadoop failure copying input bz2 file from s3

I have a map-only Hadoop job running on Amazon's EMR, on the latest AMI version (3.0.4). Once in a while I get exceptions like this:
Error: com.amazonaws.AmazonClientException: Unable to verify integrity of data download. Client calculated content length didn't match content length received from Amazon S3. The data may be corrupt.
at com.amazonaws.util.ContentLengthValidationInputStream.validate(ContentLengthValidationInputStream.java:144)
at com.amazonaws.util.ContentLengthValidationInputStream.read(ContentLengthValidationInputStream.java:81)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.read(EmrFileSystem.java:289)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at java.io.DataInputStream.read(DataInputStream.java:149)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
at org.apache.hadoop.io.compress.bzip2.CBZip2InputStream.readAByte(CBZip2InputStream.java:195)
at org.apache.hadoop.io.compress.bzip2.CBZip2InputStream.getAndMoveToFrontDecode(CBZip2InputStream.java:866)
at org.apache.hadoop.io.compress.bzip2.CBZip2InputStream.initBlock(CBZip2InputStream.java:504)
at org.apache.hadoop.io.compress.bzip2.CBZip2InputStream.changeStateToProcessABlock(CBZip2InputStream.java:333)
at org.apache.hadoop.io.compress.bzip2.CBZip2InputStream.read(CBZip2InputStream.java:423)
at org.apache.hadoop.io.compress.BZip2Codec.read(BZip2Codec.java:483)
at java.io.InputStream.read(InputStream.java:101)
at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:211)
at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:164)
at org.apache.hadoop.mapred.MapTask.nextKeyValue(MapTask.java:544)
at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper.nextKeyValue(WrappedMapper.java:91)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:775)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
at org.apache.hadoop.mapred.YarnChild.run(YarnChild.java:162)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
Is there any way to cure this? Why does it happen? Is it a network problem on Amazon's side? It can't be a problem with the input file, as re-running the same job usually succeeds. Is there a way to catch this exception? Why doesn't Hadoop recover from it automatically?
My main class looks like this:
public class LogParserMapReduce extends Configured implements Tool {

    private static final Log LOG = LogFactory.getLog(LogParserMapReduce.class);

    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = super.getConf();
        conf.setBoolean("mapred.compress.map.output", true);
        conf.setClass("mapred.map.output.compression.codec", GzipCodec.class, CompressionCodec.class);
        conf.setBoolean("keep.failed.task.files", true);

        /*
         * Instantiate a Job object for your job's configuration.
         */
        Job job = Job.getInstance(conf);

        /*
         * The expected command-line arguments are the paths containing
         * input and output data. Terminate the job if the number of
         * command-line arguments is not exactly 2.
         */
        if (args.length != 2) {
            System.out.printf("Usage: LogParserMapReduce <input dir> <output dir>\n");
            System.exit(-1);
        }

        /*
         * Specify the jar file that contains your driver, mapper, and reducer.
         * Hadoop will transfer this jar file to nodes in your cluster running
         * mapper and reducer tasks.
         */
        job.setJarByClass(LogParserMapReduce.class);

        /*
         * Specify an easily-decipherable name for the job.
         * This job name will appear in reports and logs.
         */
        job.setJobName("LogParser");

        /*
         * Specify the paths to the input and output data based on the
         * command-line arguments.
         */
        FileInputFormat.addInputPaths(job, args[0]);
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        FileOutputFormat.setCompressOutput(job, true);
        FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);

        /*
         * Specify the mapper class (this is a map-only job).
         */
        job.setMapperClass(LogParserMapper.class);

        /*
         * For the SysLogEvent count application, the input and output
         * files are in text format - the default format.
         *
         * In text format files, each record is a line delineated by a
         * line terminator.
         *
         * When you use other input formats, you must call the
         * setInputFormatClass method. When you use other
         * output formats, you must call the setOutputFormatClass method.
         */

        /*
         * For the logs count application, the mapper's output keys and
         * values have the same data types as the reducer's output keys
         * and values: Text and IntWritable.
         *
         * When they are not the same data types, you must call the
         * setMapOutputKeyClass and setMapOutputValueClass
         * methods.
         */

        /*
         * Specify the job's output key and value classes.
         */
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(Text.class);
        job.setNumReduceTasks(0);

        LOG.info("LogParserMapReduce: waitingForCompletion");

        /*
         * Start the MapReduce job and wait for it to finish.
         * If it finishes successfully, return 0. If not, return 1.
         */
        boolean success = job.waitForCompletion(true);
        return success ? 0 : 1;
    }
}
The solution was very simple (after Amazon's customer support told me): I had to upgrade to the latest AMI (currently 3.1.0), which ships the latest Hadoop (2.4), and also make sure that I compiled the Java code against the same Hadoop version. Ever since, I haven't seen this kind of problem.
