I am querying the proxy server of a Flink cluster at 127.0.1.1:9069, but I am not getting a response to my query. I am calculating the sum of all input numbers by creating a server on port 9000, and I am storing the sum in ValueState.
Flink Job:
private transient ValueState<Tuple2<String, Long>> sum;
@Override
public void flatMap(Tuple2<Long, Long> input, Collector<Tuple2<String,Long>> out) throws Exception {
if (input.f1==-1){
sum.clear();
return;
}
Tuple2<String, Long> currentSum = sum.value();
currentSum.f1 += input.f1;
sum.update(currentSum);
System.out.println("Current Sum: "+(sum.value().f1)+"\nCurrent Count: "+(sum.value().f0));
out.collect(new Tuple2<>("sum", sum.value().f1));
}
@Override
public void open(Configuration config) {
ValueStateDescriptor<Tuple2<String, Long>> descriptor =
new ValueStateDescriptor<>(
"sum", // the state name
TypeInformation.of(new TypeHint<Tuple2<String, Long>>() {}),
Tuple2.of("sum", 0L)); // default value of the state, if nothing was set
sum = getRuntimeContext().getState(descriptor);
}
inp.flatMap(new FlatMapFunction<String, Tuple2<Long, Long>>() {
@Override
public void flatMap(String inpstr, Collector<Tuple2<Long, Long>> out) throws Exception{
for (String word : inpstr.split("\\s")) {
try {
if(word.equals("quit")){
throw new QuitValueState("Stopping!!!", hostname, port);
}
if(word.equals("clear")){
word="-1";
}
out.collect(Tuple2.of(1L, Long.valueOf(word)));
}
catch ( NumberFormatException e) {
System.out.println("Enter valid number: "+e.getMessage());
}catch (QuitValueState ex){
System.out.println("Quitting!!!");
}
}
}
}).keyBy(0).flatMap(new StreamingJob())
.keyBy(0).asQueryableState("query-name");
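Here inp is the stream read from the socket server on port 9000 mentioned above; for reference, it is created roughly like this (assuming the usual socket source; hostname and port are the same variables used elsewhere in my code):
DataStream<String> inp = env.socketTextStream(hostname, port); // port 9000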
On the Flink cluster I can see the proxy server at 127.0.1.1:9069.
Client side:
public static void main(String[] args) throws IOException, InterruptedException, Exception {
QueryableStateClient client = new QueryableStateClient("127.0.1.1", 9069);
System.out.println("Querying on "+args[0]);
JobID jobId = JobID.fromHexString(args[0]);
ValueStateDescriptor<Tuple2<String, Long>> descriptor =
new ValueStateDescriptor<>(
"sum",
TypeInformation.of(new TypeHint<Tuple2<String, Long>>() {
}));
CompletableFuture<ValueState<Tuple2<String, Long>>> resultFuture =
client.getKvState(jobId, "query-name", "sum", BasicTypeInfo.STRING_TYPE_INFO, descriptor);
System.out.println(resultFuture);
resultFuture.thenAccept(response -> {
try {
Tuple2<String, Long> res = response.value();
System.out.println("Queried sum value: " + res);
} catch (Exception e) {
e.printStackTrace();
}
System.out.println("Exiting future ...");
});
}
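(Note that thenAccept only registers an asynchronous callback, so main can return before the response ever arrives; blocking on the future rules that out. A minimal sketch:)
ValueState<Tuple2<String, Long>> state = resultFuture.join(); // blocks until the proxy responds
System.out.println("Queried sum value: " + state.value()); // value() may throw IOException; main declares throws Exception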
I am trying to code a chat application in Java, but I have a problem getting strings from a Scanner into a DataOutputStream.
Client part:
public class Client {
private static Scanner sc = new Scanner(System.in);
private static Socket client;
private static Random r = new Random();
public static int status = 0;
public static String nickname;
public static void main(String[] args) {
connect();
handle();
// end();
}
private static void connect()
{
try {
client = new Socket("localhost",9999);
DataOutputStream dos = new DataOutputStream(client.getOutputStream());
// sc = the Scanner(System.in) declared above
// sl = the nickname I want to put into the DataOutputStream
String sl = sc.nextLine();
sendPacket(new Greetings(sl));
} catch (IOException e) {
e.printStackTrace();
}
}
private static void sendPacket(Packet packet)
{
try {
DataOutputStream dos = new DataOutputStream(client.getOutputStream());
System.out.print(dos.size());
dos.writeInt(packet.getId());
packet.send(dos);
} catch (SocketException se) {} catch (IOException e) {
e.printStackTrace();
}
}
private static void handle()
{
}
private static void end()
{
try {
client.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
I am using a packet system to send information. The format of the auth packet is [id(int)] + [name(String)].
If I send the information without the Scanner (e.g. a hardcoded name and id), it works.
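For example, a hardcoded send along these lines goes through fine (a sketch of the same packet format):
DataOutputStream dos = new DataOutputStream(client.getOutputStream());
dos.writeInt(120); // packet id of Greetings
dos.writeUTF("hardcodedName"); // nickname
dos.flush();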
Greetings packet code:
public class Greetings extends Packet{
private String nickname;
public Greetings(){}
public Greetings(String nickname)
{
this.nickname = nickname;
}
@Override
public int getId() {
System.out.print(nickname);
return 120;
}
@Override
public void send(DataOutputStream dos) {
System.out.print("GOT IT");
try {
dos.writeUTF(nickname);
System.out.println("Nickname "+nickname);
dos.flush();
} catch (IOException e) {
e.printStackTrace();
}
}
@Override
public void receive(DataInputStream dis) {
}
}
It seems like something stupid that I can't see, and it's driving me crazy.
Server's part:
// For each new client, a new handler
public class ClientHandler extends Thread {
private Socket socket;
public ClientHandler(Socket socket)
{
this.socket = socket;
start();
}
public void run()
{
while (true) {
try {
if (!isInterrupted()) {
DataInputStream dis = new DataInputStream(socket.getInputStream());
if (dis.available() > 0) {
Packet p = PacketManager.getPacketById(dis.readInt());
p.receive(dis);
p.handle();
}
}
} catch (SocketException se) {
} catch (IOException e) {
} finally {
end();
}
}
}
private void end()
{
interrupt();
try {
socket.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
It seems like something simple I just can't see. It's as if there were some difference between the Scanner's string and an ordinary string, but their hex dumps are the same...
The problem appears when you write both an int and a String into the DataOutputStream.
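For reference, the reads on the server side are expected to mirror those writes in the same order, as blocking calls (sketch):
int id = dis.readInt(); // matches dos.writeInt(packet.getId())
String nickname = dis.readUTF(); // matches dos.writeUTF(nickname) in Greetings.send()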
I am using OkHttp for networking and currently get a charStream from response.charStream(), which I then pass to Gson for parsing. Once parsed and inflated, I deflate the model again to save it to disk using a stream. It seems like extra work to have to go from networkReader to Model to DiskWriter. Is it possible with Okio to instead go from networkReader to JSONParser(reader) as well as networkReader to DiskWriter(reader)? Basically, I want to be able to read from the network stream twice.
You can use a MirroredSource (taken from this gist).
public class MirroredSource {
private final Buffer buffer = new Buffer();
private final Source source;
private final AtomicBoolean sourceExhausted = new AtomicBoolean();
public MirroredSource(final Source source) {
this.source = source;
}
public Source original() {
return new okio.Source() {
@Override
public long read(final Buffer sink, final long byteCount) throws IOException {
final long bytesRead = source.read(sink, byteCount);
if (bytesRead > 0) {
synchronized (buffer) {
sink.copyTo(buffer, sink.size() - bytesRead, bytesRead);
// Notify the mirror to continue
buffer.notify();
}
} else {
sourceExhausted.set(true);
}
return bytesRead;
}
@Override
public Timeout timeout() {
return source.timeout();
}
@Override
public void close() throws IOException {
source.close();
sourceExhausted.set(true);
synchronized (buffer) {
buffer.notify();
}
}
};
}
public Source mirror() {
return new okio.Source() {
@Override
public long read(final Buffer sink, final long byteCount) throws IOException {
synchronized (buffer) {
while (!sourceExhausted.get()) {
// only need to synchronise on reads when the source is not exhausted.
if (buffer.request(byteCount)) {
return buffer.read(sink, byteCount);
} else {
try {
buffer.wait();
} catch (final InterruptedException e) {
//No op
}
}
}
}
return buffer.read(sink, byteCount);
}
@Override
public Timeout timeout() {
return new Timeout();
}
@Override
public void close() throws IOException { /* not used */ }
};
}
}
Usage would look like:
MirroredSource mirroredSource = new MirroredSource(response.body().source()); //Or however you're getting your original source
Source originalSource = mirroredSource.original();
Source secondSource = mirroredSource.mirror();
doParsing(originalSource);
writeToDisk(secondSource);
originalSource.close();
If you want something more robust you can repurpose Relay from OkHttp.
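Note that mirror() only makes progress while original() is being read (it blocks waiting otherwise), so either drain the original completely first, as above, or run the second consumer on its own thread. A sketch of the threaded variant (MyModel and the file name are placeholders):
BufferedSource network = Okio.buffer(mirroredSource.original());
BufferedSource mirror = Okio.buffer(mirroredSource.mirror());
// drain the mirror to disk on a worker thread while parsing happens on this one
new Thread(() -> {
try (BufferedSink file = Okio.buffer(Okio.sink(new File("response.json")))) {
file.writeAll(mirror);
} catch (IOException e) {
e.printStackTrace();
}
}).start();
MyModel model = new Gson().fromJson(new InputStreamReader(network.inputStream()), MyModel.class);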
I am trying to implement a MyBatis custom type handler for File using FileInputStream.
Here is my code for setting the parameter:
@MappedJdbcTypes(JdbcType.LONGVARBINARY)
public class FileByteaHandler extends BaseTypeHandler<File> {
@Override
public void setNonNullParameter(PreparedStatement ps, int i, File file, JdbcType jdbcType) throws SQLException{
try {
FileInputStream fis = new FileInputStream(file);
ps.setBinaryStream(i, fis, (int) file.length());
} catch(FileNotFoundException ex) {
Logger.getLogger(FileByteaHandler.class.getName()).log(Level.SEVERE, null, ex);
}
}
}
My question is:
I cannot close this FileInputStream at the end of the method, otherwise MyBatis will not be able to read the data from it. In fact, I do not know where I can close the FileInputStream. Is there a way to call close() after the query has been executed in MyBatis?
Thanks in advance,
UPDATE
Thanks to Jarandinor for the help. Here is my code for this type handler, and hopefully it can help someone:
@MappedJdbcTypes(JdbcType.LONGVARBINARY)
public class FileByteaHandler extends BaseTypeHandler<File> {
@Override
public void setNonNullParameter(PreparedStatement ps, int i, File file, JdbcType jdbcType) throws SQLException {
try {
AutoCloseFileInputStream fis = new AutoCloseFileInputStream(file);
ps.setBinaryStream(i, fis, (int) file.length());
} catch(FileNotFoundException ex) {
Logger.getLogger(FileByteaHandler.class.getName()).log(Level.SEVERE, null, ex);
}
}
@Override
public File getNullableResult(ResultSet rs, String columnName) throws SQLException {
File file = null;
try(InputStream input = rs.getBinaryStream(columnName)) {
file = getResult(rs, input);
} catch(IOException e) {
System.out.println(e.getMessage());
}
return file;
}
public File createFile() {
File file = new File("e:/target-file"); //your temp file path
return file;
}
private File getResult(ResultSet rs, InputStream input) throws SQLException {
File file = createFile();
try(OutputStream output = new FileOutputStream(file)) {
int bufSize = 0x8000; // 32 KB read buffer
byte[] buf = new byte[bufSize];
int s = 0;
int tl = 0;
while( (s = input.read(buf, 0, bufSize)) > 0 ) {
output.write(buf, 0, s);
tl += s;
}
output.flush();
} catch(IOException e) {
System.out.println(e.getMessage());
}
return file;
}
@Override
public File getNullableResult(ResultSet rs, int columnIndex) throws SQLException {
File file = null;
try(InputStream input = rs.getBinaryStream(columnIndex)) {
file = getResult(rs, input);
} catch(IOException e) {
System.out.println(e.getMessage());
}
return file;
}
@Override
public File getNullableResult(CallableStatement cs, int columnIndex) throws SQLException {
throw new SQLException("getNullableResult(CallableStatement cs, int columnIndex) is called");
}
private class AutoCloseFileInputStream extends FileInputStream {
public AutoCloseFileInputStream(File file) throws FileNotFoundException {
super(file);
}
@Override
public int read() throws IOException {
int c = super.read();
if(available() <= 0) {
close();
}
return c;
}
public int read(byte[] b) throws IOException {
int c = super.read(b);
if(available() <= 0) {
close();
}
return c;
}
public int read(byte[] b, int off, int len) throws IOException {
int c = super.read(b, off, len);
if(available() <= 0) {
close();
}
return c;
}
}
}
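To make MyBatis pick the handler up, I register it explicitly; one programmatic way (names assumed; the XML <typeHandlers> route works too):
configuration.getTypeHandlerRegistry().register(File.class, JdbcType.LONGVARBINARY, new FileByteaHandler());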
I don't know a good way to close the stream after query execution.
Method 1:
Read the file into a byte[]
(note: in JDK 7 you can use Files.readAllBytes(Paths.get(file.getPath()));)
and use:
ps.setBytes(i, bytes);
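A minimal sketch of that variant, keeping the handler signature from the question:
@Override
public void setNonNullParameter(PreparedStatement ps, int i, File file, JdbcType jdbcType) throws SQLException {
try {
byte[] bytes = Files.readAllBytes(file.toPath()); // JDK 7+
ps.setBytes(i, bytes); // no stream left open after execution
} catch (IOException ex) {
throw new SQLException("Could not read file " + file, ex);
}
}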
Method 2: or create your own class inherited from FileInputStream, override the public int read() throws IOException method, and close the stream when the end of the file is reached:
#Override
public int read() throws IOException {
int c = super.read();
if(c == -1) {
super.close();
}
return c;
}
Maybe you should override public int read(byte[] b) throws IOException too;
it depends on the JDBC driver implementation.
Method 3: you can change your FileByteaHandler (see the sketch below):
1) add a List<FileInputStream> field;
2) put the opened InputStream into that list in setNonNullParameter;
3) add a closeStreams() method that closes and removes all InputStreams from the list.
Then invoke this method after you have invoked your mapper method: session.getConfiguration().getTypeHandlerRegistry().getMappingTypeHandler(FileByteaHandler.class).closeStreams();
Or use the MyBatis plugin system to run the above call.
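A minimal sketch of method 3 (only the stream-tracking parts are shown; in real code the list would need to be thread-safe, since a type handler can be shared across threads):
private final List<FileInputStream> openStreams = new ArrayList<>();
@Override
public void setNonNullParameter(PreparedStatement ps, int i, File file, JdbcType jdbcType) throws SQLException {
try {
FileInputStream fis = new FileInputStream(file);
openStreams.add(fis); // remember the stream so it can be closed after the query
ps.setBinaryStream(i, fis, (int) file.length());
} catch (FileNotFoundException ex) {
throw new SQLException(ex);
}
}
public void closeStreams() {
for (FileInputStream fis : openStreams) {
try {
fis.close();
} catch (IOException ignored) {
}
}
openStreams.clear();
}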
I have this Hadoop MapReduce code that works on graph data (in adjacency-list form); it is similar in spirit to in-adjacency-list to out-adjacency-list transformation algorithms. The main MapReduce task code is the following:
public class TestTask extends Configured
implements Tool {
public static class TTMapper extends MapReduceBase
implements Mapper<Text, TextArrayWritable, Text, NeighborWritable> {
@Override
public void map(Text key,
TextArrayWritable value,
OutputCollector<Text, NeighborWritable> output,
Reporter reporter) throws IOException {
int numNeighbors = value.get().length;
double weight = (double)1 / numNeighbors;
Text[] neighbors = (Text[]) value.toArray();
NeighborWritable me = new NeighborWritable(key, new DoubleWritable(weight));
for (int i = 0; i < neighbors.length; i++) {
output.collect(neighbors[i], me);
}
}
}
public static class TTReducer extends MapReduceBase
implements Reducer<Text, NeighborWritable, Text, Text> {
@Override
public void reduce(Text key,
Iterator<NeighborWritable> values,
OutputCollector<Text, Text> output,
Reporter arg3)
throws IOException {
ArrayList<NeighborWritable> neighborList = new ArrayList<NeighborWritable>();
while(values.hasNext()) {
neighborList.add(values.next());
}
NeighborArrayWritable neighbors = new NeighborArrayWritable
(neighborList.toArray(new NeighborWritable[0]));
Text out = new Text(neighbors.toString());
output.collect(key, out);
}
}
@Override
public int run(String[] arg0) throws Exception {
JobConf conf = Util.getMapRedJobConf("testJob",
SequenceFileInputFormat.class,
TTMapper.class,
Text.class,
NeighborWritable.class,
1,
TTReducer.class,
Text.class,
Text.class,
TextOutputFormat.class,
"test/in",
"test/out");
JobClient.runJob(conf);
return 0;
}
public static void main(String[] args) throws Exception {
int res = ToolRunner.run(new TestTask(), args);
System.exit(res);
}
}
The auxiliary code is the following:
TextArrayWritable:
public class TextArrayWritable extends ArrayWritable {
public TextArrayWritable() {
super(Text.class);
}
public TextArrayWritable(Text[] values) {
super(Text.class, values);
}
}
NeighborWritable:
public class NeighborWritable implements Writable {
private Text nodeId;
private DoubleWritable weight;
public NeighborWritable(Text nodeId, DoubleWritable weight) {
this.nodeId = nodeId;
this.weight = weight;
}
public NeighborWritable () { }
public Text getNodeId() {
return nodeId;
}
public DoubleWritable getWeight() {
return weight;
}
public void setNodeId(Text nodeId) {
this.nodeId = nodeId;
}
public void setWeight(DoubleWritable weight) {
this.weight = weight;
}
@Override
public void readFields(DataInput in) throws IOException {
nodeId = new Text();
nodeId.readFields(in);
weight = new DoubleWritable();
weight.readFields(in);
}
@Override
public void write(DataOutput out) throws IOException {
nodeId.write(out);
weight.write(out);
}
public String toString() {
return "NW[nodeId=" + (nodeId != null ? nodeId.toString() : "(null)") +
",weight=" + (weight != null ? weight.toString() : "(null)") + "]";
}
public boolean equals(Object o) {
if (!(o instanceof NeighborWritable)) {
return false;
}
NeighborWritable that = (NeighborWritable)o;
return (nodeId.equals(that.getNodeId()) && (weight.equals(that.getWeight())));
}
}
and the Util class:
public class Util {
public static JobConf getMapRedJobConf(String jobName,
Class<? extends InputFormat> inputFormatClass,
Class<? extends Mapper> mapperClass,
Class<?> mapOutputKeyClass,
Class<?> mapOutputValueClass,
int numReducer,
Class<? extends Reducer> reducerClass,
Class<?> outputKeyClass,
Class<?> outputValueClass,
Class<? extends OutputFormat> outputFormatClass,
String inputDir,
String outputDir) throws IOException {
JobConf conf = new JobConf();
if (jobName != null)
conf.setJobName(jobName);
conf.setInputFormat(inputFormatClass);
conf.setMapperClass(mapperClass);
if (numReducer == 0) {
conf.setNumReduceTasks(0);
conf.setOutputKeyClass(outputKeyClass);
conf.setOutputValueClass(outputValueClass);
conf.setOutputFormat(outputFormatClass);
} else {
// may set actual number of reducers
// conf.setNumReduceTasks(numReducer);
conf.setMapOutputKeyClass(mapOutputKeyClass);
conf.setMapOutputValueClass(mapOutputValueClass);
conf.setReducerClass(reducerClass);
conf.setOutputKeyClass(outputKeyClass);
conf.setOutputValueClass(outputValueClass);
conf.setOutputFormat(outputFormatClass);
}
// delete the existing target output folder
FileSystem fs = FileSystem.get(conf);
fs.delete(new Path(outputDir), true);
// specify input and output DIRECTORIES (not files)
FileInputFormat.addInputPath(conf, new Path(inputDir));
FileOutputFormat.setOutputPath(conf, new Path(outputDir));
return conf;
}
}
My input is the following graph (stored in binary format; the text form is given here):
1 2
2 1,3,5
3 2,4
4 3,5
5 2,4
According to the logic of the code the output should be:
1 NWArray[size=1,{NW[nodeId=2,weight=0.3333333333333333],}]
2 NWArray[size=3,{NW[nodeId=5,weight=0.5],NW[nodeId=3,weight=0.5],NW[nodeId=1,weight=1.0],}]
3 NWArray[size=2,{NW[nodeId=2,weight=0.3333333333333333],NW[nodeId=4,weight=0.5],}]
4 NWArray[size=2,{NW[nodeId=5,weight=0.5],NW[nodeId=3,weight=0.5],}]
5 NWArray[size=2,{NW[nodeId=2,weight=0.3333333333333333],NW[nodeId=4,weight=0.5],}]
But the output comes out as:
1 NWArray[size=1,{NW[nodeId=2,weight=0.3333333333333333],}]
2 NWArray[size=3,{NW[nodeId=5,weight=0.5],NW[nodeId=5,weight=0.5],NW[nodeId=5,weight=0.5],}]
3 NWArray[size=2,{NW[nodeId=2,weight=0.3333333333333333],NW[nodeId=2,weight=0.3333333333333333],}]
4 NWArray[size=2,{NW[nodeId=5,weight=0.5],NW[nodeId=5,weight=0.5],}]
5 NWArray[size=2,{NW[nodeId=2,weight=0.3333333333333333],NW[nodeId=2,weight=0.3333333333333333],}]
I cannot understand why the expected output is not being produced. Any help will be appreciated.
Thanks.
You're falling foul of object re-use:
while(values.hasNext()) {
neighborList.add(values.next());
}
values.next() will return the same object reference each time, but the underlying contents of that object will change for each iteration (the readFields method is called to re-populate the contents).
I suggest you amend it to the following (you'll need to obtain the Configuration conf variable from a setup method, unless you can obtain it from the Reporter or OutputCollector; sorry, I don't use the old API):
while (values.hasNext()) {
neighborList.add(
ReflectionUtils.copy(conf, values.next(), new NeighborWritable()));
}
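(WritableUtils.clone is an equivalent one-liner, assuming the same conf is in scope:)
neighborList.add(WritableUtils.clone(values.next(), conf));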
But I still can't understand why my unit test passed, then. Here is the code:
public class UWLTInitReducerTest {
private Text key;
private Iterator<NeighborWritable> values;
private NeighborArrayWritable nodeData;
private TTReducer reducer;
/**
* Set up the state for calling the reduce function
*/
@Before
public void setUp() throws Exception {
key = new Text("1001");
NeighborWritable[] neighbors = new NeighborWritable[4];
for (int i = 0; i < 4; i++) {
neighbors[i] = new NeighborWritable(new Text("300" + i), new DoubleWritable((double) 1 / (1 + i)));
}
values = Arrays.asList(neighbors).iterator();
nodeData = new NeighborArrayWritable(neighbors);
reducer = new TTReducer();
}
/**
* Test method for TTReducer#reduce - valid input
*/
@Test
public void testMapValid() {
// mock the output object
OutputCollector<Text, Text> output = mock(OutputCollector.class);
try {
// call the API
reducer.reduce(key, values, output, null);
// in order (sequential) verification of the calls to output.collect()
verify(output).collect(key, new Text(nodeData.toString()));
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
Why didn't this code catch the bug?