How to use the same TCP connection for HTTP/2 async requests in OkHttp3? - okhttp

Moved from https://github.com/square/okhttp/issues/6051
This is a problem with OkHttp keep-alive connections; I wonder how to deal with this issue.
OkHttpClient httpClient = new OkHttpClient.Builder()
        .sslSocketFactory(factory, (X509TrustManager) TRUST_MANAGERS[0])
        .hostnameVerifier((s, sslSession) -> true)
        .protocols(Arrays.asList(Protocol.HTTP_2, Protocol.HTTP_1_1))
        .retryOnConnectionFailure(true)
        .connectionPool(new ConnectionPool(5, 60, TimeUnit.SECONDS))
        .build();
Request request = new Request.Builder()
        .url("https://example.com/")
        .build();
int count = 2;
long startTime = System.currentTimeMillis();
final CountDownLatch countDownLatch = new CountDownLatch(count);
for (int i = 1; i <= count; ++i) {
    httpClient.newCall(request).enqueue(new Callback() {
        @Override
        public void onFailure(Call call, IOException e) {
            System.out.println(Thread.currentThread().getName() + "-failed-" + e.getMessage());
            finish();
        }

        @Override
        public void onResponse(Call call, Response response) throws IOException {
            response.close(); // close the body to release the connection back to the pool
            finish();
        }

        private void finish() {
            countDownLatch.countDown();
        }
    });
}
countDownLatch.await();
System.out.println("Time: " + (System.currentTimeMillis() - startTime));
System.exit(0);

Use LoggingEventListener if you want to see what is going on.
Generally, when we first connect we can't know ahead of time that a single connection will be HTTP/2 and able to serve both requests. So when the connection pool is empty, OkHttp will establish two connections, discover they are both HTTP/2, and then disconnect one.
So most likely both requests use a single connection, but you see two established.
You have a few options:
1) Don't worry: let it create two connections temporarily, and OkHttp will close one as soon as it realises it has two HTTP/2 connections.
2) Issue a warmup request first, then only create secondary requests to the same host once you have received a connectionAcquired event on that first request.
But I'd encourage you not to worry and just let OkHttp do its thing.
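For reference, a minimal sketch of both suggestions; EventListener#connectionAcquired and okhttp3.logging.LoggingEventListener are real OkHttp API, but the warmup wiring and the URL are illustrative:
import java.io.IOException;
import java.util.concurrent.CountDownLatch;
import okhttp3.*;

public class WarmupExample {
    public static void main(String[] args) throws Exception {
        CountDownLatch warmedUp = new CountDownLatch(1);
        OkHttpClient client = new OkHttpClient.Builder()
                // Counts down once the first call has a live connection; swap in
                // new okhttp3.logging.LoggingEventListener.Factory() to trace the pool instead.
                .eventListener(new EventListener() {
                    @Override
                    public void connectionAcquired(Call call, Connection connection) {
                        warmedUp.countDown();
                    }
                })
                .build();
        Request request = new Request.Builder().url("https://example.com/").build();
        // Warmup request: establishes and pools the HTTP/2 connection.
        client.newCall(request).enqueue(new Callback() {
            @Override public void onFailure(Call call, IOException e) { warmedUp.countDown(); }
            @Override public void onResponse(Call call, Response response) { response.close(); }
        });
        warmedUp.await();
        // Secondary requests now reuse the pooled HTTP/2 connection.
        try (Response response = client.newCall(request).execute()) {
            System.out.println(response.protocol()); // expect h2
        }
    }
}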

Related

Performance testing Java HTTP and Kafka: is a while loop slowing performance?

I've set up a spring-reactive server and use a while loop inside of a @Scheduled job to make fire-and-forget requests from an HttpClient (the backend just returns the same simple string via Tomcat). It is configured as follows:
public ConnectionProvider getConnectionProvider() {
    int maxConnections = 20;
    ConnectionProvider connProvider = ConnectionProvider
            .builder("webclient-conn-pool")
            .maxConnections(maxConnections)
            .maxIdleTime(Duration.of(20, ChronoUnit.SECONDS))
            .maxLifeTime(Duration.of(1000, ChronoUnit.SECONDS))
            .pendingAcquireMaxCount(2000)
            .pendingAcquireTimeout(Duration.ofMillis(20000))
            .build();
    return connProvider;
}

public HttpClient getHttpClient() {
    return HttpClient
            .create(getConnectionProvider());
    //.secure(sslContextSpec -> sslContextSpec.sslContext(webClientSslHelper.getSslContext()))
    /*.tcpConfiguration(tcpClient -> {
        LoopResources loop = LoopResources.create("webclient-event-loop",
                selectorThreadCount, workerThreadCount, Boolean.TRUE);
        return tcpClient
                .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 10000)
                .option(ChannelOption.TCP_NODELAY, true);
    })*/
}
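The request loop itself isn't shown in the question; as a point of comparison, here is a hedged sketch of a non-blocking fire-and-forget loop over this reactor-netty HttpClient (get()/uri()/response() are real reactor-netty calls; the endpoint URL, count, and concurrency are placeholders). Issuing requests one at a time from a while loop caps throughput at roughly one request per round trip per thread, which may explain numbers in the tens of requests per second:
import reactor.core.publisher.Flux;
import reactor.netty.http.client.HttpClient;

public class FireAndForget {
    public static void fire(HttpClient client) {
        // Issue 5000 GETs with bounded concurrency instead of one at a time.
        Flux.range(0, 5000)
                .flatMap(i -> client.get()
                        .uri("http://localhost:8080/test") // placeholder endpoint
                        .response(), 100)                  // at most 100 requests in flight
                .blockLast();                              // wait once, at the end
    }
}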
I've also created a job that uses an @Autowired kafkaTemplate and runs a while loop that sends a short text string.
For HTTP/Tomcat calls I am getting around 65 requests a second.
For Kafka I am maxing out around 50,000 requests with a half-second lag.
application.properties
spring.kafka.bootstrap-servers=PLAINTEXT://localhost:9092,PLAINTEXT://localhost:9093
host.name=localhost
Job to make calls
@Async
@Scheduled(fixedDelay = 15000)
public void scheduleTaskUsingCronExpression() {
    generateCalls();
}

private void generateCalls() {
    try {
        int i = 0;
        System.out.println("start");
        long startTime = System.currentTimeMillis();
        while (i <= 5000) {
            //Thread.sleep(5);
            String message = "Test Message sadg sad-";
            kafkaTemplate.send(TOPIC, message + i);
            i++;
        }
        long endTime = System.currentTimeMillis();
        System.out.println((endTime - startTime));
        System.out.println("done");
    } catch (Exception e) {
        e.printStackTrace();
    }
    System.out.println("RUNNING");
}
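One caveat when interpreting the timing above: kafkaTemplate.send() is asynchronous and only hands the record to the producer's in-memory buffer, so the measured elapsed time is enqueue time, not delivery time. A sketch of timing actual delivery (KafkaTemplate#flush() is a real Spring Kafka call; the loop mirrors the question's code):
long startTime = System.currentTimeMillis();
for (int i = 0; i <= 5000; i++) {
    kafkaTemplate.send(TOPIC, "Test Message sadg sad-" + i);
}
// Block until every buffered record has been handed to the broker,
// so endTime reflects delivery rather than mere enqueueing.
kafkaTemplate.flush();
long endTime = System.currentTimeMillis();
System.out.println(endTime - startTime);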
Kafka partition config
@Bean
public KafkaAdmin kafkaAdmin() {
    //String bootstrapAddress = "localhost:29092";
    String bootstrapAddress = "localhost:9092";
    Map<String, Object> configs = new HashMap<>();
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
    return new KafkaAdmin(configs);
}

@Bean
public NewTopic testTopic() {
    return new NewTopic("test-topic", 6, (short) 1);
}
Kafka consumer consuming the message
@KafkaListener(topics = "test-topic", groupId = "one", concurrency = "6")
public void listenGroupFoo(String message) {
    if (message.indexOf("-0") != -1) {
        startTime = new Date().getTime();
        System.out.println("Starting Message in group foo: " + message);
    } else if (message.indexOf("-100000") != -1) {
        endTime = new Date().getTime();
        System.out.println("Received Message in group foo: " + message);
        System.out.println(endTime - startTime);
    }
}
For hardware I have a 10900K (10 cores / 20 threads, 5 GHz clock speed) with 64 GB RAM and a single 970 Evo NVMe disk.
All requests are from the same local server to the same local server.
I can add more local servers for testing if needed.
Is there a better way to organize/optimize the code to make a massive number of requests?
Theories:
Multiple threads?
Changing server configurations, such as Tomcat configs (receiving or sending side)?
Not using the autowired kafkaTemplate, or creating multiple instances?
Modifying the hardware to have multiple disks?
Improving the receiving HTTP server to allow more connections?
Not using a job to make the requests?
Anything else anyone can think of to help?

WMQ connection socket constantly closed between v9_client and v6_server

We have a standalone Java application using a third-party tool to manage connection pooling, which worked for us in a v6_client + v6_server setup for a long time.
Now we are trying to migrate from v6 to v9 (yes, we are pretty late to the party.....), and found the v9_client connection to the v6_server is constantly interrupted, meaning:
The socket created by XAQueueConnectionFactory#createXAConnection() is always closed immediately, and the created XAConnection seems to be unaware of it.
Due to the socket close mentioned above, an XASession created from XAConnection.createXASession() always creates a new socket and closes it after XASession.close().
We went through the complete list of tunables for v9_client (the XAQCF column in https://www.ibm.com/support/knowledgecenter/SSFKSJ_9.0.0/com.ibm.mq.ref.dev.doc/q111800_.html) and only spotted two potentially new configs we hadn't used in v6_client, SHARECONVALLOWED and PROVIDERVERSION. Unfortunately neither helps us out. Specifically:
We tried setShareConvAllowed(WMQConstants.WMQ_SHARE_CONV_ALLOWED_[YES/NO]). Considering there's no SHARECNV property on the v6_server side, that's not a surprise.
We tried "Migration/Restricted/Normal Mode" via setProviderVersion("[6/7/8]") ([7/8] throw exceptions, as expected).
Just wondering if anybody else has had a similar experience and could share some insights. We tried v9_server + v9_client and haven't seen any similar problem, so that could also be our eventual solution.
Btw, the WMQ is hosted on Linux (RedHat), and we only use products of MQXAQueueConnectionFactory on the client side (IBM MQ classes for JMS).
Thanks.
Additional details/updates.
[update-1]
--[playground setup]
v9_client jars:
javax.jms-api-2.0.jar
com.ibm.mq.allclient(-9.0.0.[1/5]).jar
v6_client jars:
in addition to the v9_client jars, the following jars were introduced on the Eclipse classpath:
com.ibm.dhbcore-1.0.jar
com.ibm.mq.pcf-6.0.3.jar
com.ibm.mqjms-6.0.2.2.jar
com.ibm.mq-6.0.2.2.jar
com.ibm.mqetclient-6.0.2.2.jar
connector.jar
jta-1.1.jar
Testing code - single thread:
import javax.jms.*;
import com.ibm.mq.jms.*;
import com.ibm.msg.client.wmq.WMQConstants;

public class MQSeries_simpleAuditQ {
    private static String queueManager = "QM.RCTQ.ALL.01";
    private static String host = "localhost";
    private static int port = 26005;

    public static void main(String[] args) throws JMSException {
        MQXAQueueConnectionFactory queueFactory = new MQXAQueueConnectionFactory();
        System.out.println("\n\n\n*******************\nqueueFactory implementation version: " +
                queueFactory.getClass().getPackage().getImplementationVersion() + "*****************\n\n\n");
        queueFactory.setHostName(host);
        queueFactory.setPort(port);
        queueFactory.setQueueManager(queueManager);
        queueFactory.setTransportType(WMQConstants.WMQ_CM_CLIENT);
        if (queueFactory.getClass().getPackage().getImplementationVersion().split("\\.")[0].equals("9")) {
            queueFactory.setProviderVersion("6");
            //queueFactory.setShareConvAllowed(WMQConstants.WMQ_SHARE_CONV_ALLOWED_YES);
        }
        XASession xaSession;
        javax.jms.QueueConnection xaQueueConnection;
        try {
            // Obtain a QueueConnection
            System.out.println("Creating Connection...");
            xaQueueConnection = (QueueConnection) queueFactory.createXAConnection(" ", "");
            xaQueueConnection.start();
            for (int counter = 0; counter < 2; counter++) {
                try {
                    xaSession = ((XAConnection) xaQueueConnection).createXASession();
                    xaSession.close();
                } catch (Exception ex) {
                    System.out.println(ex);
                }
            }
            System.out.println("Closing connection.... ");
            xaQueueConnection.close();
        } catch (Exception e) {
            System.out.println("Error processing " + e.getMessage());
        }
    }
}
--[observations]
v6_client only created and closed a single socket, while v9_client (both 9.0.0.[1/5]):
created and closed a socket right after xaQueueConnection = (QueueConnection)queueFactory.createXAConnection(" ", "");
in the inner for loop, created a socket right after xaSession = ((XAConnection)xaQueueConnection).createXASession();, and closed it after xaSession.close();
Naively, I was expecting the socket to remain open until xaQueueConnection.close().
[update-2]
Using queueFactory.setProviderVersion("9") and queueFactory.setShareConvAllowed(WMQConstants.WMQ_SHARE_CONV_ALLOWED_YES) for v9_server + v9_client, we don't see the constant socket close issue we saw with v6_server + v9_client, which is good news.
[update-3] MCAUSER attribute for all SVRCONN channels on v6_server. Same on v9_server (which doesn't have the same socket close problem when connected to by the same v9_client).
display channel (SYSTEM.ADMIN.SVRCONN)
MCAUSER(mqm)
display channel (SYSTEM.AUTO.SVRCONN)
MCAUSER( )
display channel (SYSTEM.DEF.SVRCONN)
MCAUSER( )
[update-4]
I tried setting MCAUSER() to mqm, then connecting with both (blank) and mqm from the client side; both can create connections, but we still see the same unexpected socket close using v9_client + v6_server. After updating MCAUSER(), I always ran REFRESH SECURITY and restarted the qmgr.
I also tried setting the system variable to blank in Eclipse before creating the connection with a blank user; that didn't help either.
[update-5]
Limiting our discussion to v9_client + v9_server: the async testing code below generates a ton of XASession create/close requests over a limited number of existing connections. With SHARECNV(1) we would also end up with an unaffordably high TIME_WAIT count, but a SHARECNV larger than 1 (e.g. 10) might introduce an extra performance penalty.
Async testing code
import java.util.ArrayList;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.FutureTask;
import javax.jms.*;
import com.ibm.mq.jms.*;
import com.ibm.msg.client.wmq.WMQConstants;
public class MQSeries_simpleAuditQ_Async_v9 {
private static String queueManager = "QM.ALPQ.ALL.01";
private static int port = 26010;
private static String host = "localhost";
private static int connCount = 20;
private static int amp = 100;
private static ExecutorService amplifier = Executors.newFixedThreadPool(amp);
public static void main(String[] args) throws JMSException {
MQXAQueueConnectionFactory queueFactory= new MQXAQueueConnectionFactory();
System.out.println("\n\n\n*******************\nqueueFactory implementation version: " +
queueFactory.getClass().getPackage().getImplementationVersion() + "*****************\n\n\n");
queueFactory.setTransportType(WMQConstants.WMQ_CM_CLIENT);
if (queueFactory.getClass().getPackage().getImplementationVersion().split("\\.")[0].equals("9")) {
queueFactory.setProviderVersion("9");
queueFactory.setShareConvAllowed(WMQConstants.WMQ_SHARE_CONV_ALLOWED_YES);
}
queueFactory.setHostName(host);
queueFactory.setPort(port);
queueFactory.setQueueManager(queueManager);
//queueFactory.setChannel("");
ArrayList<QueueConnection> xaQueueConnections = new ArrayList<QueueConnection>();
try {
// Obtain a QueueConnection
System.out.println("Creating Connection...");
//System.setProperty("user.name", "mqm");
//System.out.println("system username: " + System.getProperty("user.name"));
for (int ct=0; ct<connCount; ct++) {
// xaQueueConnection instance of MQXAQueueConnection
QueueConnection xaQueueConnection = (QueueConnection)queueFactory.createXAConnection(" ", "");
xaQueueConnection.start();
xaQueueConnections.add(xaQueueConnection);
}
ArrayList<Double> totalElapsedTimeRecord = new ArrayList<Double>();
ArrayList<FutureTask<Double>> taskBuffer = new ArrayList<FutureTask<Double>>();
for (int loop=0; loop <= 10; loop++) {
try {
for (int i=0; i<amp; i++) {
int idx = (int)(Math.random()*((connCount)));
System.out.println("Using connection: " + idx);
FutureTask<Double> xaSessionPoker = new FutureTask<Double>(new XASessionPoker(xaQueueConnections.get(idx)));
amplifier.submit(xaSessionPoker);
taskBuffer.add(xaSessionPoker);
}
System.out.println("loop " + loop + " completed");
} catch (Exception ex) {
System.out.println(ex);
}
}
for (FutureTask<Double> xaSessionPoker : taskBuffer) {
totalElapsedTimeRecord.add(xaSessionPoker.get());
}
System.out.println("Average xaSession poking time: " + calcAverage(totalElapsedTimeRecord));
System.out.println("Closing connections.... ");
for (QueueConnection xaQueueConnection : xaQueueConnections) {
xaQueueConnection.close();
}
} catch (Exception e) {
System.out.println("Error processing " + e.getMessage());
}
amplifier.shutdown();
}
private static double calcAverage(ArrayList<Double> myArr) {
double sum = 0;
for (Double val : myArr) {
sum += val;
}
return sum/myArr.size();
}
// create and close session through QueueConnection object passed in.
private static class XASessionPoker implements Callable<Double> {
// conn instance of MQXAQueueConnection. ref. QueueProviderService
private QueueConnection conn;
XASessionPoker(QueueConnection conn) {
this.conn = conn;
}
#Override
public Double call() throws Exception {
XASession xaSession;
double elapsed = 0;
try {
final long start = System.currentTimeMillis();
// ref. DualSessionWrapper
// xaSession instance of MQXAQueueSession
xaSession = ((XAConnection) conn).createXASession();
xaSession.close();
final long end = System.currentTimeMillis();
elapsed = (end - start) / 1000.0;
} catch (Exception e) {
// TODO Auto-generated catch block
System.out.println(e);
}
return elapsed;
}
}
}
We found the root cause to be a combination of no more session pooling + the Bitronix TM not offering session pooling across transactions. Specifically (in our case), Bitronix manages JmsPooledConnection pooling, but every time an (XA)session is used (under a JmsPooledConnection), a new socket is created (createXASession()) and closed (xaSession.close()).
One solution is to wrap the JMS connection with an (XA)session pool, similar to what's been done in https://github.com/messaginghub/pooled-jms/tree/master/pooled-jms/src/main/java/org/messaginghub/pooled/jms
http://bjansen.github.io/java/2018/03/04/high-performance-mq-jms.html also suggests Spring's CachingConnectionFactory works well, which sounds like a special case of the first solution.
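A minimal sketch of the CachingConnectionFactory variant, assuming spring-jms is on the classpath (CachingConnectionFactory and its setters are real Spring API; the connection settings mirror the question's and the cache size is an arbitrary placeholder):
import javax.jms.Connection;
import javax.jms.Session;
import org.springframework.jms.connection.CachingConnectionFactory;
import com.ibm.mq.jms.MQXAQueueConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class SessionPoolExample {
    public static void main(String[] args) throws Exception {
        MQXAQueueConnectionFactory target = new MQXAQueueConnectionFactory();
        target.setHostName("localhost");
        target.setPort(26010);
        target.setQueueManager("QM.ALPQ.ALL.01");
        target.setTransportType(WMQConstants.WMQ_CM_CLIENT);
        // Wrap the MQ factory so sessions are cached and reused across transactions,
        // instead of a socket being opened and closed around every createXASession().
        CachingConnectionFactory caching = new CachingConnectionFactory(target);
        caching.setSessionCacheSize(20); // placeholder size
        Connection conn = caching.createConnection();
        conn.start();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        session.close();   // returned to the cache; the socket stays open
        conn.close();      // the shared connection is kept alive by the wrapper
        caching.destroy(); // physically closes everything
    }
}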

Tyrus websocket client - onMessage does not get called although connection is successful

I am successfully connecting to a local websocket server with Tyrus, but the onMessage method does not get called. I set up Fiddler as a proxy in between and I can see that the server responds with two messages; however, they are not printed out in my code. I more or less adapted the sample code:
The onOpen message is printed out.
public static void createAndConnect(String channel) {
    CountDownLatch messageLatch;
    try {
        messageLatch = new CountDownLatch(1);
        final ClientEndpointConfig cec = ClientEndpointConfig.Builder.create().build();
        ClientManager client = ClientManager.createClient();
        client.connectToServer(new Endpoint() {
            @Override
            public void onOpen(Session session, EndpointConfig config) {
                System.out.println("On Open and is Open " + session.isOpen());
                session.addMessageHandler((Whole<String>) message -> {
                    System.out.println("Received message: " + message);
                    messageLatch.countDown();
                });
            }
        }, cec, new URI("ws://192.168.1.248/socket.io/1/websocket/" + channel));
        messageLatch.await(5, TimeUnit.SECONDS); // I also tried increasing the timeout to 30 sec; doesn't help
    } catch (Exception e) {
        e.printStackTrace();
    }
}
That's a known issue: it will work if you rewrite the lambda as an anonymous class, or use Session#addMessageHandler(Class, MessageHandler) (you can use lambdas there).
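A sketch of the second fix, using the two-argument overload so the message type survives the lambda's type erasure (Session#addMessageHandler(Class, MessageHandler.Whole) is real javax.websocket 1.1 API):
@Override
public void onOpen(Session session, EndpointConfig config) {
    // Passing String.class explicitly gives the runtime the handler's type,
    // which is erased when a lambda implements MessageHandler.Whole<String>.
    session.addMessageHandler(String.class, message -> {
        System.out.println("Received message: " + message);
        messageLatch.countDown();
    });
}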

CloseableHttpClient.execute freezes once every few weeks despite timeouts

We have a Groovy singleton that uses PoolingHttpClientConnectionManager (httpclient 4.3.6) with a pool size of 200 to handle very highly concurrent connections to a search service and process the XML responses.
Despite having specified timeouts, it freezes about once a month but runs perfectly fine the rest of the time.
The Groovy singleton is below. The method retrieveInputFromURL seems to block on client.execute(get):
@Singleton(strict=false)
class StreamManagerUtil {
    // Instantiate once and cache for lifetime of Singleton class
    private static PoolingHttpClientConnectionManager connManager = new PoolingHttpClientConnectionManager();
    private static CloseableHttpClient client;
    private static final IdleConnectionMonitorThread staleMonitor = new IdleConnectionMonitorThread(connManager);
    private int warningLimit;
    private int readTimeout;
    private int connectionTimeout;
    private int connectionFetchTimeout;
    private int poolSize;
    private int routeSize;
    PropertyManager propertyManager = PropertyManagerFactory.getInstance().getPropertyManager("sebe.properties")

    StreamManagerUtil() {
        // Initialize all instance variables in singleton from properties file
        readTimeout = 6
        connectionTimeout = 6
        connectionFetchTimeout = 6
        // Pooling
        poolSize = 200
        routeSize = 50
        // Connection pool size and number of routes to cache
        connManager.setMaxTotal(poolSize);
        connManager.setDefaultMaxPerRoute(routeSize);
        // ConnectTimeout : time to establish connection with GSA
        // ConnectionRequestTimeout : time to get connection from pool
        // SocketTimeout : waiting for packets from GSA
        RequestConfig config = RequestConfig.custom()
                .setConnectTimeout(connectionTimeout * 1000)
                .setConnectionRequestTimeout(connectionFetchTimeout * 1000)
                .setSocketTimeout(readTimeout * 1000).build();
        // Keep alive for 5 seconds if server does not have keep alive header
        ConnectionKeepAliveStrategy myStrategy = new ConnectionKeepAliveStrategy() {
            @Override
            public long getKeepAliveDuration(HttpResponse response, HttpContext context) {
                HeaderElementIterator it = new BasicHeaderElementIterator(
                        response.headerIterator(HTTP.CONN_KEEP_ALIVE));
                while (it.hasNext()) {
                    HeaderElement he = it.nextElement();
                    String param = he.getName();
                    String value = he.getValue();
                    if (value != null && param.equalsIgnoreCase("timeout")) {
                        return Long.parseLong(value) * 1000;
                    }
                }
                return 5 * 1000;
            }
        };
        // Close all connections older than 5 seconds. Run as separate thread.
        staleMonitor.start();
        staleMonitor.join(1000);
        client = HttpClients.custom().setDefaultRequestConfig(config).setKeepAliveStrategy(myStrategy).setConnectionManager(connManager).build();
    }

    private retrieveInputFromURL(String categoryUrl, String xForwFor, boolean isXml) throws Exception {
        URL url = new URL(categoryUrl)
        GPathResult searchResponse = null
        InputStream inputStream = null
        HttpResponse response = null
        HttpGet get
        try {
            long startTime = System.nanoTime();
            get = new HttpGet(categoryUrl);
            // Headers must be set before the request is executed
            if (xForwFor != null) {
                get.setHeader("X-Forwarded-For", xForwFor)
            }
            response = client.execute(get);
            int resCode = response.getStatusLine().getStatusCode();
            if (resCode == HttpStatus.SC_OK) {
                if (isXml) {
                    extractXmlString(response)
                } else {
                    StringBuffer buffer = buildStringFromResponse(response)
                    return buffer.toString();
                }
            }
        } catch (Exception e) {
            throw e;
        } finally {
            // Release connection back to pool
            if (response != null) {
                EntityUtils.consume(response.getEntity());
            }
        }
    }

    private extractXmlString(HttpResponse response) {
        InputStream inputStream = response.getEntity().getContent()
        XmlSlurper slurper = new XmlSlurper()
        slurper.setFeature("http://xml.org/sax/features/validation", false)
        slurper.setFeature("http://apache.org/xml/features/disallow-doctype-decl", false)
        slurper.setFeature("http://apache.org/xml/features/nonvalidating/load-dtd-grammar", false)
        slurper.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false)
        return slurper.parse(inputStream)
    }

    private StringBuffer buildStringFromResponse(HttpResponse response) {
        StringBuffer buffer = new StringBuffer();
        BufferedReader rd = new BufferedReader(new InputStreamReader(response.getEntity().getContent()));
        String line = "";
        while ((line = rd.readLine()) != null) {
            buffer.append(line);
            System.out.println(line);
        }
        return buffer
    }

    public class IdleConnectionMonitorThread extends Thread {
        private final HttpClientConnectionManager connMgr;
        private volatile boolean shutdown;

        public IdleConnectionMonitorThread(PoolingHttpClientConnectionManager connMgr) {
            super();
            this.connMgr = connMgr;
        }

        @Override
        public void run() {
            try {
                while (!shutdown) {
                    synchronized (this) {
                        wait(5000);
                        connMgr.closeExpiredConnections();
                        connMgr.closeIdleConnections(10, TimeUnit.SECONDS);
                    }
                }
            } catch (InterruptedException ex) {
                // Ignore
            }
        }

        public void shutdown() {
            shutdown = true;
            synchronized (this) {
                notifyAll();
            }
        }
    }
}
I also found this in the log, leading me to believe it happened while waiting for response data:
java.net.SocketTimeoutException: Read timed out
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:150)
    at java.net.SocketInputStream.read(SocketInputStream.java:121)
    at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
Findings thus far:
We are using Java 1.8u25. There is an open issue on a similar scenario:
https://bugs.openjdk.java.net/browse/JDK-8075484
HttpClient had a similar report, https://issues.apache.org/jira/browse/HTTPCLIENT-1589, but this was fixed in the 4.3.6 version we are using.
Questions
Can this be a synchronisation issue? From my understanding, even though the singleton is accessed by multiple threads, the only shared data is the cached CloseableHttpClient.
Is there anything else fundamentally wrong with this code or approach that may be causing this behaviour?
I do not see anything obviously wrong with your code. I would strongly recommend setting the SO_TIMEOUT parameter on the connection manager, though, to make sure it applies to all new sockets at creation time, not at request execution time.
It would also help to know what exactly 'freezing' means: are worker threads getting blocked waiting to acquire connections from the pool, or waiting for response data?
Please also note that worker threads can appear 'frozen' if the server keeps on sending bits of chunk-coded data. As usual, a wire / context log of the client session would help a lot:
http://hc.apache.org/httpcomponents-client-4.3.x/logging.html
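A sketch of that SO_TIMEOUT suggestion against the HttpClient 4.3 API (SocketConfig and setDefaultSocketConfig are real calls; the six-second value mirrors the question's readTimeout):
import org.apache.http.config.SocketConfig;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

class SoTimeoutExample {
    static CloseableHttpClient build() {
        PoolingHttpClientConnectionManager connManager = new PoolingHttpClientConnectionManager();
        // SO_TIMEOUT set here applies to every new socket at creation time,
        // unlike RequestConfig#setSocketTimeout, which only takes effect once
        // a request starts executing over the connection.
        connManager.setDefaultSocketConfig(SocketConfig.custom()
                .setSoTimeout(6 * 1000) // mirrors readTimeout = 6 in the question
                .build());
        return HttpClients.custom().setConnectionManager(connManager).build();
    }
}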

How can an ISO server support concurrent requests?

I have implemented an ISO server using an ASCII channel and ASCII packager, listening on a port and responding to ISO requests.
How can I make my server accept concurrent requests and send the responses?
If you are using Q2, just deploy a QServer and set minSessions and maxSessions, whose default values are 0 and 100.
Here is an example jPOS server that handles concurrent requests:
http://didikhari.web.id/java/jpos-client-receive-response-specific-port/
ISOServer works with a thread pool, so you can accept concurrent requests out of the box. Every socket connection is handled by its own thread, so I think all you have to do is assign an ISORequestListener to your ISOServer to actually process your incoming messages.
Here's a test program taken from the jPOS guide:
public class Test implements ISORequestListener {
    public Test() {
        super();
    }

    public boolean process(ISOSource source, ISOMsg m) {
        try {
            m.setResponseMTI();
            m.set(39, "00");
            source.send(m);
        } catch (ISOException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        Logger logger = new Logger();
        logger.addListener(new SimpleLogListener(System.out));
        ServerChannel channel = new XMLChannel(new XMLPackager());
        ((LogSource) channel).setLogger(logger, "channel");
        ISOServer server = new ISOServer(8000, channel, null);
        server.setLogger(logger, "server");
        server.addISORequestListener(new Test());
        new Thread(server).start();
    }
}
