Create AIScene instance from the file's content - spring

I'm writing a Java web service where it is possible to upload a 3D object, operate on it, and store it.
What I'm trying to do is create an AIScene instance from a byte[] holding the file's content.
I have found no way to do this in the docs; all import methods require a path.
Right now I'm looking at both the LWJGL Java binding of Assimp and the C++ version; it doesn't matter which one is used to solve the issue.
Edit: the code I'm trying to get working:
@Override
public String uploadFile(MultipartFile file) {
AIFileIO fileIo = AIFileIO.create();
AIFileOpenProcI fileOpenProc = new AIFileOpenProc() {
public long invoke(long pFileIO, long fileName, long openMode) {
AIFile aiFile = AIFile.create();
final ByteBuffer data;
try {
data = ByteBuffer.wrap(file.getBytes());
} catch (IOException e) {
throw new RuntimeException();
}
AIFileReadProcI fileReadProc = new AIFileReadProc() {
public long invoke(long pFile, long pBuffer, long size, long count) {
long max = Math.min(data.remaining(), size * count);
memCopy(memAddress(data) + data.position(), pBuffer, max);
return max;
}
};
AIFileSeekI fileSeekProc = new AIFileSeek() {
public int invoke(long pFile, long offset, int origin) {
if (origin == Assimp.aiOrigin_CUR) {
data.position(data.position() + (int) offset);
} else if (origin == Assimp.aiOrigin_SET) {
data.position((int) offset);
} else if (origin == Assimp.aiOrigin_END) {
data.position(data.limit() + (int) offset);
}
return 0;
}
};
AIFileTellProcI fileTellProc = new AIFileTellProc() {
public long invoke(long pFile) {
return data.limit();
}
};
aiFile.ReadProc(fileReadProc);
aiFile.SeekProc(fileSeekProc);
aiFile.FileSizeProc(fileTellProc);
return aiFile.address();
}
};
AIFileCloseProcI fileCloseProc = new AIFileCloseProc() {
public void invoke(long pFileIO, long pFile) {
/* Nothing to do */
}
};
fileIo.set(fileOpenProc, fileCloseProc, NULL);
AIScene scene = aiImportFileEx(file.getName(),
aiProcess_JoinIdenticalVertices | aiProcess_Triangulate, fileIo); // ISSUE HERE. file.getName() is not a path, just a name. so is getOriginalName() in my case.
try{
Long id = scene.mMeshes().get(0);
AIMesh mesh = AIMesh.create(id);
AIVector3D vertex = mesh.mVertices().get(0);
return mesh.mName().toString() + ": " + (vertex.x() + " " + vertex.y() + " " + vertex.z());
}catch(Exception e){
e.printStackTrace();
}
return "fail";
}
When debugging the method I get an access violation in the method that binds to the native function:
public static long naiImportFileEx(long pFile, int pFlags, long pFS)
This is the message:
#
A fatal error has been detected by the Java Runtime Environment:
#
EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x000000007400125d, pid=6400, tid=0x0000000000003058
#
JRE version: Java(TM) SE Runtime Environment (8.0_201-b09) (build 1.8.0_201-b09)
Java VM: Java HotSpot(TM) 64-Bit Server VM (25.201-b09 mixed mode windows-amd64 compressed oops)
Problematic frame:
V [jvm.dll+0x1e125d]
#
Failed to write core dump. Minidumps are not enabled by default on client versions of Windows
#
An error report file with more information is saved as:
C:\Users\ragos\IdeaProjects\objectstore3d\hs_err_pid6400.log
#
If you would like to submit a bug report, please visit:
http://bugreport.java.com/bugreport/crash.jsp
#

It is possible if we use the aiImportFileFromMemory method.
The approach I originally wanted to follow was copied from a GitHub demo and actually copies the buffer around unnecessarily.
The reason for the access violation was the use of indirect buffers (for more info on why that is a problem, check this out).
The solution is not nearly as complicated as the code I initially pasted:
@Override
public String uploadFile(MultipartFile file) throws IOException {
ByteBuffer buffer = BufferUtils.createByteBuffer((int) file.getSize());
buffer.put(file.getBytes());
buffer.flip();
AIScene scene = Assimp.aiImportFileFromMemory(buffer,aiProcess_Triangulate, (ByteBuffer) null);
Long id = scene.mMeshes().get(0);
AIMesh mesh = AIMesh.create(id);
AIVector3D vertex = mesh.mVertices().get(0);
return mesh.mName().dataString() + ": " + (vertex.x() + " " + vertex.y() + " " + vertex.z());
}
Here I create a direct buffer of the appropriate size, load the data, and flip it (this part is a must). After that, let Assimp do its magic and you get pointers to the structure. With the return statement I just check that I got valid data.
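As a side note, here is a minimal sketch of how the import could be guarded and cleaned up; aiImportFileFromMemory returns null on failure, and aiGetErrorString / aiReleaseImport are the usual Assimp calls for error reporting and for freeing the native scene (the surrounding structure is illustrative only):
AIScene scene = Assimp.aiImportFileFromMemory(buffer, aiProcess_Triangulate, (ByteBuffer) null);
if (scene == null) {
    // Import failed; Assimp keeps a human-readable reason
    throw new IllegalStateException(Assimp.aiGetErrorString());
}
try {
    // ... read meshes, vertices, etc. ...
} finally {
    // Free the native memory owned by the imported scene
    Assimp.aiReleaseImport(scene);
}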
Edit: As was pointed out in the comments, this implementation is limited to a single file upload and assumes it gets everything it needs from that one MultipartFile, so it won't work well with formats that reference external files. See the docs for more detail.
The demo linked in the question's comments, which the question used as a base, has a different use case from my original one.

Related

Rdlc report unable to parse regular expression

I am using .NET Framework 4.7.2. In a Web API project with Ninject dependency injection, I am trying to generate an RDLC report with an expression. The report generates perfectly when I am not using any expression, but everything goes wrong when I use one. It is just a normal sum expression of two dataset fields.
=Fields!AdditionalPremiumDeposit.Value + Fields!RefundOfPremium.Value
The two fields are dataset fields (decimal).
Whenever I try to generate a report I get the error below:
Failed to load expression host assembly. Details: Type 'Ninject.Web.WebApi.NinjectDependencyResolver'
in assembly 'Ninject.Web.WebApi, Version=3.3.1.0, Culture=neutral, PublicKeyToken=c7192dc5380945e7'
is not marked as serializable.
The stack trace:
at
Microsoft.ReportingServices.RdlExpressions.ReportRuntime.ProcessLoadingExprHostException(Exception
e, ProcessingErrorCode errorCode)
at Microsoft.ReportingServices.RdlExpressions.ReportRuntime.LoadCompiledCode(Report report, Boolean
includeParameters, Boolean parametersOnly, ObjectModelImpl reportObjectModel, ReportRuntimeSetup
runtimeSetup)
at Microsoft.ReportingServices.OnDemandProcessing.Merge.Init(Boolean includeParameters, Boolean
parametersOnly)
at Microsoft.ReportingServices.OnDemandProcessing.Merge.Init(ParameterInfoCollection parameters)
at Microsoft.ReportingServices.ReportProcessing.ReportProcessing.ProcessOdpReport(Report report,
OnDemandMetadata odpMetadataFromSnapshot, ProcessingContext pc, Boolean snapshotProcessing, Boolean
reprocessSnapshot, Boolean processUserSortFilterEvent, Boolean processWithCachedData, ErrorContext
errorContext, DateTime executionTime, IChunkFactory cacheDataChunkFactory, StoreServerParameters
storeServerParameters, GlobalIDOwnerCollection globalIDOwnerCollection, SortFilterEventInfoMap
oldUserSortInformation, EventInformation newUserSortInformation, String
oldUserSortEventSourceUniqueName, ExecutionLogContext executionLogContext,
OnDemandProcessingContext& odpContext)
at Microsoft.ReportingServices.ReportProcessing.ReportProcessing.RenderReport(IRenderingExtension
newRenderer, DateTime executionTimeStamp, ProcessingContext pc, RenderingContext rc, IChunkFactory
cacheDataChunkFactory, IChunkFactory yukonCompiledDefinition, Boolean& dataCached)
at Microsoft.Reporting.LocalService.CreateSnapshotAndRender(CatalogItemContextBase itemContext,
ReportProcessing repProc, IRenderingExtension renderer, ProcessingContext pc, RenderingContext rc,
SubreportCallbackHandler subreportHandler, ParameterInfoCollection parameters,
DatasourceCredentialsCollection credentials)
at Microsoft.Reporting.LocalService.Render(CatalogItemContextBase itemContext, Boolean
allowInternalRenderers, ParameterInfoCollection reportParameters, IEnumerable dataSources,
DatasourceCredentialsCollection credentials, CreateAndRegisterStream createStreamCallback,
ReportRuntimeSetup runtimeSetup)
at Microsoft.Reporting.WebForms.LocalReport.InternalRender(String format, Boolean
allowInternalRenderers, String deviceInfo, PageCountMode pageCountMode, CreateAndRegisterStream
createStreamCallback, Warning[]& warnings)
I don't understand what the relation is between the RDLC and Ninject.
My report generation method:
public static byte[] ReportByte(string dataSourceName, object dataSource, string reportFileName, FileLayout fileLayout = FileLayout.Potrait)
{
try
{
var lr = new LocalReport();
var path = Path.Combine(HttpContext.Current.Server.MapPath("~/Reports/Rdlc"), reportFileName);
if (!File.Exists(path))
throw new ArgumentException("report not found");
lr.ReportPath = path;
var rd = new ReportDataSource(dataSourceName, dataSource);
lr.DataSources.Add(rd);
string reportType = "pdf";
string mimeType;
string encoding;
string fileNameExtension;
string deviceInfo = string.Empty;
if (fileLayout == FileLayout.Landscape)
{
deviceInfo = "<DeviceInfo>" +
" <OutputFormat>pdf</OutputFormat>" +
" <PageWidth>11in</PageWidth>" +
" <PageHeight>8.5in</PageHeight>" +
" <MarginTop>0.5in</MarginTop>" +
" <MarginLeft>.50in</MarginLeft>" +
" <MarginRight>.50in</MarginRight>" +
" <MarginBottom>0.50in</MarginBottom>" +
"</DeviceInfo>";
}
else
{
deviceInfo = "<DeviceInfo>" +
" <OutputFormat>pdf</OutputFormat>" +
" <PageWidth>8.5in</PageWidth>" +
" <PageHeight>11in</PageHeight>" +
" <MarginTop>0.5in</MarginTop>" +
" <MarginLeft>.50in</MarginLeft>" +
" <MarginRight>.50in</MarginRight>" +
" <MarginBottom>0.50in</MarginBottom>" +
"</DeviceInfo>";
}
Warning[] warnings;
string[] streams;
byte[] renderedBytes;
renderedBytes = lr.Render(
reportType,
deviceInfo,
out mimeType,
out encoding,
out fileNameExtension,
out streams,
out warnings);
return renderedBytes;
}
catch (Exception)
{
throw;
}
}
I am also sharing my dependency kernel creation method:
private static IKernel CreateKernel()
{
var kernel = new StandardKernel();
try
{
kernel.Load(Assembly.GetExecutingAssembly());
HttpConfiguration.DependencyResolver = kernel.Get<System.Web.Http.Dependencies.IDependencyResolver>();
return kernel;
}
catch
{
kernel.Dispose();
throw;
}
}
It seems Ninject.Web.WebApi 3.3.1 has a serialization issue. I changed it to 3.3.0 and now everything works fine for me. Hope this helps other folks who face the issue in the future.
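If the package came from NuGet, one way to do the downgrade (assuming the package ID from the error message) is from the Package Manager Console:
Install-Package Ninject.Web.WebApi -Version 3.3.0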

How to get speed of network, whether it is fast or slow in xamarin iOS?

How to get speed of network, whether it is fast or slow in xamarin iOS?
I used NetworkReachability, but it only tells me whether the URL is reachable or not.
I want to get the speed of the network - fast or poor?
private static NetworkReachability _defaultRouteReachability;
public static event EventHandler ReachabilityChanged;
public static bool IsNetworkAvailable(string url)
{
if (_defaultRouteReachability == null)
{
_defaultRouteReachability = new NetworkReachability(url);
_defaultRouteReachability.SetNotification(OnChange);
_defaultRouteReachability.Schedule(CFRunLoop.Current, CFRunLoop.ModeDefault);
}
NetworkReachabilityFlags flags;
return _defaultRouteReachability.TryGetFlags(out flags) &&
IsReachableWithoutRequiringConnection(flags);
}
private static bool IsReachableWithoutRequiringConnection(NetworkReachabilityFlags flags)
{
// Is it reachable with the current network configuration?
bool isReachable = (flags & NetworkReachabilityFlags.Reachable) != 0;
// Do we need a connection to reach it?
bool noConnectionRequired = (flags & NetworkReachabilityFlags.ConnectionRequired) == 0;
// Since the network stack will automatically try to get the WAN up,
// probe that
if ((flags & NetworkReachabilityFlags.IsWWAN) != 0)
noConnectionRequired = true;
return isReachable && noConnectionRequired;
}
private static void OnChange(NetworkReachabilityFlags flags)
{
var h = ReachabilityChanged;
if (h != null)
h(null, EventArgs.Empty);
}
Something like this can help you, unless you want to use a library. This basically gives you the technical definition of internet speed, though the real number will be a little higher. It's very similar to the solution suggested by @Martheen:
public async Task<string> CheckInternetSpeed()
{
//DateTime Variable To Store Download Start Time.
DateTime dt1 = DateTime.Now;
string internetSpeed;
try
{
// Create an HttpClient to download a test payload
var client = new HttpClient();
// The number of bytes downloaded is stored in 'data'
byte[] data = await client.GetByteArrayAsync("https://www.example.com/");
//DateTime Variable To Store Download End Time.
DateTime dt2 = DateTime.Now;
//To calculate the speed in KB/s, divide the number of bytes by 1024 and then by the elapsed seconds (end time minus start time).
Console.WriteLine("ConnectionSpeed: DataSize (kb) " + data.Length / 1024);
Console.WriteLine("ConnectionSpeed: ElapsedTime (secs) " + (dt2 - dt1).TotalSeconds);
internetSpeed = "ConnectionSpeed: (kb/s) " + Math.Round((data.Length / 1024) / (dt2 - dt1).TotalSeconds, 2);
}
catch (Exception ex)
{
internetSpeed = "ConnectionSpeed:Unknown Exception-" + ex.Message;
}
Console.WriteLine(internetSpeed);
return internetSpeed;
}
The Xamarin.Essentials library can't get you the Internet speed, but if you use dependency injection, you can use the native APIs like this.

Memory issues when running Spark job on relatively large input

I am running a Spark cluster with 50 machines. Each machine is a VM with 8 cores and 50GB of memory (41GB seems to be available to Spark).
I am running on several input folders; I estimate the size of the input to be ~250GB gz-compressed.
Although the number and configuration of machines I am using seems sufficient, the job fails after about 40 minutes of running, and I can see the following errors in the logs:
2558733 [Result resolver thread-2] WARN org.apache.spark.scheduler.TaskSetManager - Lost task 345.0 in stage 1.0 (TID 345, hadoop-w-3.c.taboola-qa-01.internal): java.lang.OutOfMemoryError: Java heap space
java.lang.StringCoding$StringDecoder.decode(StringCoding.java:149)
java.lang.StringCoding.decode(StringCoding.java:193)
java.lang.String.<init>(String.java:416)
java.lang.String.<init>(String.java:481)
com.doit.customer.dataconverter.Phase0$3.call(Phase0.java:699)
com.doit.customer.dataconverter.Phase0$3.call(Phase0.java:660)
org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:164)
org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:164)
org.apache.spark.rdd.RDD$$anonfun$13.apply(RDD.scala:596)
org.apache.spark.rdd.RDD$$anonfun$13.apply(RDD.scala:596)
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
org.apache.spark.rdd.FilteredRDD.compute(FilteredRDD.scala:34)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
org.apache.spark.scheduler.Task.run(Task.scala:54)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)
and also:
2653545 [Result resolver thread-2] WARN org.apache.spark.scheduler.TaskSetManager - Lost task 122.1 in stage 1.0 (TID 392, hadoop-w-22.c.taboola-qa-01.internal): java.lang.OutOfMemoryError: GC overhead limit exceeded
java.lang.StringCoding$StringDecoder.decode(StringCoding.java:149)
java.lang.StringCoding.decode(StringCoding.java:193)
java.lang.String.<init>(String.java:416)
java.lang.String.<init>(String.java:481)
com.doit.customer.dataconverter.Phase0$3.call(Phase0.java:699)
com.doit.customer.dataconverter.Phase0$3.call(Phase0.java:660)
org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:164)
org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:164)
org.apache.spark.rdd.RDD$$anonfun$13.apply(RDD.scala:596)
org.apache.spark.rdd.RDD$$anonfun$13.apply(RDD.scala:596)
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
org.apache.spark.rdd.FilteredRDD.compute(FilteredRDD.scala:34)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
org.apache.spark.scheduler.Task.run(Task.scala:54)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)
How do I go about debugging such an issue?
EDIT: I found the root cause of the problem. It is this piece of code:
private static final int MAX_FILE_SIZE = 40194304;
....
....
JavaPairRDD<String, List<String>> typedData = filePaths.mapPartitionsToPair(new PairFlatMapFunction<Iterator<String>, String, List<String>>() {
@Override
public Iterable<Tuple2<String, List<String>>> call(Iterator<String> filesIterator) throws Exception {
List<Tuple2<String, List<String>>> res = new ArrayList<>();
String fileType = null;
List<String> linesList = null;
if (filesIterator != null) {
while (filesIterator.hasNext()) {
try {
Path file = new Path(filesIterator.next());
// filter non-trc files
if (!file.getName().startsWith("1")) {
continue;
}
fileType = getType(file.getName());
Configuration conf = new Configuration();
CompressionCodecFactory compressionCodecs = new CompressionCodecFactory(conf);
CompressionCodec codec = compressionCodecs.getCodec(file);
FileSystem fs = file.getFileSystem(conf);
ContentSummary contentSummary = fs.getContentSummary(file);
long fileSize = contentSummary.getLength();
InputStream in = fs.open(file);
if (codec != null) {
in = codec.createInputStream(in);
} else {
throw new IOException();
}
byte[] buffer = new byte[MAX_FILE_SIZE];
BufferedInputStream bis = new BufferedInputStream(in, BUFFER_SIZE);
int count = 0;
int bytesRead = 0;
try {
while ((bytesRead = bis.read(buffer, count, BUFFER_SIZE)) != -1) {
count += bytesRead;
}
} catch (Exception e) {
log.error("Error reading file: " + file.getName() + ", trying to read " + BUFFER_SIZE + " bytes at offset: " + count);
throw e;
}
Iterable<String> lines = Splitter.on("\n").split(new String(buffer, "UTF-8").trim());
linesList = Lists.newArrayList(lines);
// get rid of first line in file
Iterator<String> it = linesList.iterator();
if (it.hasNext()) {
it.next();
it.remove();
}
//res.add(new Tuple2<>(fileType,linesList));
} finally {
res.add(new Tuple2<>(fileType, linesList));
}
}
}
return res;
}
In particular, allocating a 40MB buffer for each file in order to read its content through a BufferedInputStream. This exhausts the heap memory at some point.
The thing is:
If I read line by line (which does not require a buffer), the read will be very inefficient.
If I allocate one buffer and reuse it for each file read - is that possible given the parallelism, or will it get overwritten by several threads?
Any suggestions are welcome...
EDIT 2: I fixed the first memory issue by moving the byte array allocation outside the iterator, so it gets reused across all partition elements. But there is still the new String(buffer, "UTF-8").trim(), which is created for the split - an object that also gets created every time. I could use a StringBuffer/StringBuilder, but then how would I set the charset encoding without a String object?
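As a rough illustration of the charset question, a reused byte array can be decoded without constructing a String by going through java.nio.charset (a sketch only; buffer and count here stand for the reused array and the number of bytes actually read):
CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder();
// Wrap only the filled portion of the reused array and decode it into a CharBuffer
// (throws CharacterCodingException on invalid input; use one decoder per task,
// since CharsetDecoder is not thread-safe)
CharBuffer chars = decoder.decode(ByteBuffer.wrap(buffer, 0, count));
// chars can then be scanned for '\n' without ever materializing one big String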
Eventually I changed the code as follows:
// Transform list of files to list of all files' content in lines grouped by type
JavaPairRDD<String,List<String>> typedData = filePaths.mapToPair(new PairFunction<String, String, List<String>>() {
@Override
public Tuple2<String, List<String>> call(String filePath) throws Exception {
Tuple2<String, List<String>> tuple = null;
try {
String fileType = null;
List<String> linesList = new ArrayList<String>();
Configuration conf = new Configuration();
CompressionCodecFactory compressionCodecs = new CompressionCodecFactory(conf);
Path path = new Path(filePath);
fileType = getType(path.getName());
tuple = new Tuple2<String, List<String>>(fileType, linesList);
// filter non-trc files
if (!path.getName().startsWith("1")) {
return tuple;
}
CompressionCodec codec = compressionCodecs.getCodec(path);
FileSystem fs = path.getFileSystem(conf);
InputStream in = fs.open(path);
if (codec != null) {
in = codec.createInputStream(in);
} else {
throw new IOException();
}
BufferedReader r = new BufferedReader(new InputStreamReader(in, "UTF-8"), BUFFER_SIZE);
// Get rid of the first line in the file
r.readLine();
// Read all lines
String line;
while ((line = r.readLine()) != null) {
linesList.add(line);
}
} catch (IOException e) { // Filtering of files whose reading went wrong
log.error("Reading of the file " + filePath + " went wrong: " + e.getMessage());
} finally {
return tuple;
}
}
});
So now I do not use a 40MB buffer but instead build the list of lines dynamically using an ArrayList. This solved my current memory issue, but now I get other strange errors that fail the job. I will report those in a different question...

How to decode protobuf binary response

I have created a test app that can recognize some images using Google Goggles. It works for me, but I receive a binary protobuf response. I have no .proto files, just the binary response. How can I get data from it? (I sent an image of a bottle of beer and got the following response):
A
TuborgLogo9 HoaniText���;�)b���2d8e991bff16229f6"�
+TR=T=AQBd6Cl4Kd8:X=OqSEi:S=_rSozFBgfKt5d9b0
+TR=T=6rLQxKE2xdA:X=OqSEi:S=gd6Aqb28X0ltBU9V
+TR=T=uGPf9zJDWe0:X=OqSEi:S=32zTfdIOdI6kuUTa
+TR=T=RLkVoGVd92I:X=OqSEi:S=P7yOhvSAOQW6SRHN
+TR=T=J1FMvNmcyMk:X=OqSEi:S=5Z631_rd2ijo_iuf�
I need to get the string "Tuborg" and, if possible, the type - "Logo".
You can decode with protoc:
protoc --decode_raw < msg.bin
Alternatively, from Java:
UnknownFieldSet.parseFrom(msg).toString()
This will show you the top-level fields. Unfortunately it can't know the exact details of the field types. long/int/bool/enum etc. are all encoded as Varint and all look the same. Strings, byte arrays, and sub-messages are length-delimited and are also indistinguishable.
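For example, a minimal sketch of that Java route, assuming protobuf-java is on the classpath and the raw response has been saved to a file named msg.bin (the file name is just illustrative):
import com.google.protobuf.UnknownFieldSet;
import java.nio.file.Files;
import java.nio.file.Paths;

public class DumpProto {
    public static void main(String[] args) throws Exception {
        byte[] msg = Files.readAllBytes(Paths.get("msg.bin"));
        // Parses the raw bytes without a schema and prints field numbers with guessed values
        System.out.println(UnknownFieldSet.parseFrom(msg).toString());
    }
}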
Some useful details here: https://github.com/dcodeIO/protobuf.js/wiki/How-to-reverse-engineer-a-buffer-by-hand
If you follow the code in UnknownFieldSet.mergeFrom() you'll see how you could try to decode sub-messages, falling back to strings if that fails - but it's not going to be very reliable.
There are 2 spare values for the wire type in the protocol - it would have been really helpful if Google had used one of these to denote sub-messages (and the other for null values, perhaps).
Here's some very crude rushed code which attempts to produce a something useful for diagnostics. It guesses at the data types and in the case of strings and sub-messages it will print both alternatives in some cases. Please don't trust any values it prints:
public static String decodeProto(byte[] data, boolean singleLine) throws IOException {
return decodeProto(ByteString.copyFrom(data), 0, singleLine);
}
public static String decodeProto(ByteString data, int depth, boolean singleLine) throws IOException {
final CodedInputStream input = CodedInputStream.newInstance(data.asReadOnlyByteBuffer());
return decodeProtoInput(input, depth, singleLine);
}
private static String decodeProtoInput(CodedInputStream input, int depth, boolean singleLine) throws IOException {
StringBuilder s = new StringBuilder("{ ");
boolean foundFields = false;
while (true) {
final int tag = input.readTag();
int type = WireFormat.getTagWireType(tag);
if (tag == 0 || type == WireFormat.WIRETYPE_END_GROUP) {
break;
}
foundFields = true;
protoNewline(depth, s, singleLine);
final int number = WireFormat.getTagFieldNumber(tag);
s.append(number).append(": ");
switch (type) {
case WireFormat.WIRETYPE_VARINT:
s.append(input.readInt64());
break;
case WireFormat.WIRETYPE_FIXED64:
s.append(Double.longBitsToDouble(input.readFixed64()));
break;
case WireFormat.WIRETYPE_LENGTH_DELIMITED:
ByteString data = input.readBytes();
try {
String submessage = decodeProto(data, depth + 1, singleLine);
if (data.size() < 30) {
boolean probablyString = true;
String str = new String(data.toByteArray(), Charsets.UTF_8);
for (char c : str.toCharArray()) {
if (c < '\n') {
probablyString = false;
break;
}
}
if (probablyString) {
s.append("\"").append(str).append("\" ");
}
}
s.append(submessage);
} catch (IOException e) {
s.append('"').append(new String(data.toByteArray())).append('"');
}
break;
case WireFormat.WIRETYPE_START_GROUP:
s.append(decodeProtoInput(input, depth + 1, singleLine));
break;
case WireFormat.WIRETYPE_FIXED32:
s.append(Float.intBitsToFloat(input.readFixed32()));
break;
default:
throw new InvalidProtocolBufferException("Invalid wire type");
}
}
if (foundFields) {
protoNewline(depth - 1, s, singleLine);
}
return s.append('}').toString();
}
private static void protoNewline(int depth, StringBuilder s, boolean noNewline) {
if (noNewline) {
s.append(" ");
return;
}
s.append('\n');
for (int i = 0; i <= depth; i++) {
s.append(INDENT);
}
}
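For instance, the method above could be called like this (a sketch; responseBytes is assumed to hold the raw protobuf response, and the snippet above also assumes an INDENT constant such as private static final String INDENT = "    ";):
byte[] responseBytes = Files.readAllBytes(Paths.get("msg.bin")); // or the bytes of the HTTP response
System.out.println(decodeProto(responseBytes, false)); // multi-line, indented output (decodeProto throws IOException)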
I'm going to assume the real question is how to decode protobufs and not how to read binary from the wire using Java.
The answer to your question can be found here
Briefly, on the wire, protobufs are encoded as 3-tuples of <key,type,value>, where:
the key is the field number assigned to the field in the .proto schema
the type is one of Varint, 64-bit (fixed64), length-delimited, start-group, end-group, or 32-bit (fixed32). It contains just enough information to decode the value of the 3-tuple; namely, it tells you how long the value is.
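As a small worked example of that encoding, the leading byte of each field is (field_number << 3) | wire_type, so it can be picked apart like this:
int tag = 0x0A;               // a typical leading byte
int fieldNumber = tag >>> 3;  // 1
int wireType = tag & 0x07;    // 2 = length-delimited (string, bytes, or sub-message)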

Can I copy multiple rows from the Visual Studio "Find Symbol Results" window?

Does anyone know how to copy all the lines in the Visual Studio "Find Symbol Results" window onto the clipboard? You can copy a single line, but I want to copy them all.
I can't believe that I'm the first one to want to do this, but I can't even find a discussion about this apparently missing feature.
Here is some code that uses the .NET UI Automation library to copy all the text to the clipboard.
Start a new WinForms project and then add the following references:
WindowsBase
UIAutomationTypes
UIAutomationClient
System.Xaml
PresentationCore
PresentationFramework
System.Management
The code also explains how to setup a menu item in visual studio to copy the contents to the clipboard.
Edit: The UI Automation only returns visible tree view items. Thus, to copy all the items, the find symbol results window is set as foreground, and then a {PGDN} is sent, and the next batch of items is copied. This process is repeated until no new items are found. It would have been preferable to use the ScrollPattern, however it threw an Exception when trying to set the scroll.
Edit 2: Tried to improve the performance of AutomationElement FindAll by running on a separate thread. Seems to be slow in some cases.
Edit 3: Improved performance by making the TreeView window very large. Can copy about 400 items in about 10 seconds.
Edit 4: Dispose objects implementing IDisposable. Better message reporting. Better handling of process args. Put window back to its original size.
using System;
using System.Collections;
using System.Collections.Generic;
using System.Diagnostics;
using System.Management;
using System.Runtime.InteropServices;
using System.Text;
using System.Threading;
using System.Windows.Automation;
using System.Windows.Forms;
namespace CopyFindSymbolResults {
// This program tries to find the 'Find Symbol Results' window in visual studio
// and copy all the text to the clipboard.
//
// The Find Symbol Results window uses a TreeView control that has the class name 'LiteTreeView32'
// In the future if this changes, then it's possible to pass in the class name as the first argument.
// Use TOOLS -> Spy++ to determine the class name.
//
// After compiling this code into an Exe, add a menu item (TOOLS -> Copy Find Symbol Results) in Visual Studio by:
// 1) TOOLS -> External Tools...
// (Note: in the 'Menu contents:' list, count which item the new item is, starting at base-1).
// Title: Copy Find Symbol Results
// Command: C:\<Path>\CopyFindSymbolResults.exe (e.g. C:\Windows\ is one option)
// 2) TOOLS -> Customize... -> Keyboard... (button)
// Show Commands Containing: tools.externalcommand
// Then select the n'th one, where n is the count from step 1).
//
static class Program {
enum Tabify {
No = 0,
Yes = 1,
Prompt = 2,
}
[STAThread]
static void Main(String[] args) {
String className = "LiteTreeView32";
Tabify tabify = Tabify.Prompt;
if (args.Length > 0) {
String arg0 = args[0].Trim();
if (arg0.Length > 0)
className = arg0;
if (args.Length > 1) {
int x = 0;
if (int.TryParse(args[1], out x))
tabify = (Tabify) x;
}
}
DateTime startTime = DateTime.Now;
Data data = new Data() { className = className };
Thread t = new Thread((o) => {
GetText((Data) o);
});
t.IsBackground = true;
t.Start(data);
lock(data) {
Monitor.Wait(data);
}
if (data.p == null || data.p.MainWindowHandle == IntPtr.Zero) {
System.Windows.Forms.MessageBox.Show("Cannot find Microsoft Visual Studio process.");
return;
}
try {
SimpleWindow owner = new SimpleWindow { Handle = data.MainWindowHandle };
if (data.appRoot == null) {
System.Windows.Forms.MessageBox.Show(owner, "Cannot find AutomationElement from process MainWindowHandle: " + data.MainWindowHandle);
return;
}
if (data.treeViewNotFound) {
System.Windows.Forms.MessageBox.Show(owner, "AutomationElement cannot find the tree view window with class name: " + data.className);
return;
}
String text = data.text;
if (text.Length == 0) { // otherwise Clipboard.SetText throws exception
System.Windows.Forms.MessageBox.Show(owner, "No text was found: " + data.p.MainWindowTitle);
return;
}
TimeSpan ts = DateTime.Now - startTime;
if (tabify == Tabify.Prompt) {
var dr = System.Windows.Forms.MessageBox.Show(owner, "Replace dashes and colons for easy pasting into Excel?", "Tabify", System.Windows.Forms.MessageBoxButtons.YesNo);
if (dr == System.Windows.Forms.DialogResult.Yes)
tabify = Tabify.Yes;
ts = TimeSpan.Zero; // prevent second prompt
}
if (tabify == Tabify.Yes) {
text = text.Replace(" - ", "\t");
text = text.Replace(" : ", "\t");
}
System.Windows.Forms.Clipboard.SetText(text);
String msg = "Data is ready on the clipboard.";
var icon = System.Windows.Forms.MessageBoxIcon.None;
if (data.lines != data.count) {
msg = String.Format("Only {0} of {1} rows copied.", data.lines, data.count);
icon = System.Windows.Forms.MessageBoxIcon.Error;
}
if (ts.TotalSeconds > 4 || data.lines != data.count)
System.Windows.Forms.MessageBox.Show(owner, msg, "", System.Windows.Forms.MessageBoxButtons.OK, icon);
} finally {
data.p.Dispose();
}
}
private class SimpleWindow : System.Windows.Forms.IWin32Window {
public IntPtr Handle { get; set; }
}
private const int TVM_GETCOUNT = 0x1100 + 5;
[DllImport("user32.dll")]
static extern int SendMessage(IntPtr hWnd, int msg, int wparam, int lparam);
[DllImport("user32.dll", SetLastError = true)]
static extern bool MoveWindow(IntPtr hWnd, int X, int Y, int Width, int Height, bool Repaint);
private class Data {
public int lines = 0;
public int count = 0;
public IntPtr MainWindowHandle = IntPtr.Zero;
public IntPtr TreeViewHandle = IntPtr.Zero;
public Process p;
public AutomationElement appRoot = null;
public String text = null;
public String className = null;
public bool treeViewNotFound = false;
}
private static void GetText(Data data) {
Process p = GetParentProcess();
data.p = p;
if (p == null || p.MainWindowHandle == IntPtr.Zero) {
data.text = "";
lock(data) { Monitor.Pulse(data); }
return;
}
data.MainWindowHandle = p.MainWindowHandle;
AutomationElement appRoot = AutomationElement.FromHandle(p.MainWindowHandle);
data.appRoot = appRoot;
if (appRoot == null) {
data.text = "";
lock(data) { Monitor.Pulse(data); }
return;
}
AutomationElement treeView = appRoot.FindFirst(TreeScope.Subtree, new PropertyCondition(AutomationElement.ClassNameProperty, data.className));
if (treeView == null) {
data.text = "";
data.treeViewNotFound = true;
lock(data) { Monitor.Pulse(data); }
return;
}
data.TreeViewHandle = new IntPtr(treeView.Current.NativeWindowHandle);
data.count = SendMessage(data.TreeViewHandle, TVM_GETCOUNT, 0, 0);
RECT rect = new RECT();
GetWindowRect(data.TreeViewHandle, out rect);
// making the window really large makes it so less calls to FindAll are required
MoveWindow(data.TreeViewHandle, 0, 0, 800, 32767, false);
int TV_FIRST = 0x1100;
int TVM_SELECTITEM = (TV_FIRST + 11);
int TVGN_CARET = 0x9;
// if a vertical scrollbar is detected, then scroll to the top sending a TVM_SELECTITEM command
var vbar = treeView.FindFirst(TreeScope.Subtree, new PropertyCondition(AutomationElement.NameProperty, "Vertical Scroll Bar"));
if (vbar != null) {
SendMessage(data.TreeViewHandle, TVM_SELECTITEM, TVGN_CARET, 0); // select the first item
}
StringBuilder sb = new StringBuilder();
Hashtable ht = new Hashtable();
int chunk = 0;
while (true) {
bool foundNew = false;
AutomationElementCollection treeViewItems = treeView.FindAll(TreeScope.Subtree, new PropertyCondition(AutomationElement.ControlTypeProperty, ControlType.TreeItem));
if (treeViewItems.Count == 0)
break;
if (ht.Count == 0) {
chunk = treeViewItems.Count - 1;
}
foreach (AutomationElement ele in treeViewItems) {
try {
String n = ele.Current.Name;
if (!ht.ContainsKey(n)) {
ht[n] = n;
foundNew = true;
data.lines++;
sb.AppendLine(n);
}
} catch {}
}
if (!foundNew || data.lines == data.count)
break;
int x = Math.Min(data.count-1, data.lines + chunk);
SendMessage(data.TreeViewHandle, TVM_SELECTITEM, TVGN_CARET, x);
}
data.text = sb.ToString();
MoveWindow(data.TreeViewHandle, rect.Left, rect.Top, rect.Right - rect.Left, rect.Bottom - rect.Top, false);
lock(data) { Monitor.Pulse(data); }
}
// this program expects to be launched from Visual Studio
// alternative approach is to look for "Microsoft Visual Studio" in main window title
// but there could be multiple instances running.
private static Process GetParentProcess() {
// from thread: http://stackoverflow.com/questions/2531837/how-can-i-get-the-pid-of-the-parent-process-of-my-application
int myId = 0;
using (Process current = Process.GetCurrentProcess())
myId = current.Id;
String query = String.Format("SELECT ParentProcessId FROM Win32_Process WHERE ProcessId = {0}", myId);
using (var search = new ManagementObjectSearcher("root\\CIMV2", query)) {
using (ManagementObjectCollection list = search.Get()) {
using (ManagementObjectCollection.ManagementObjectEnumerator results = list.GetEnumerator()) {
if (!results.MoveNext()) return null;
using (var queryObj = results.Current) {
uint parentId = (uint) queryObj["ParentProcessId"];
return Process.GetProcessById((int) parentId);
}
}
}
}
}
[DllImport("user32.dll")]
private static extern bool GetWindowRect(IntPtr hWnd, out RECT lpRect);
[StructLayout(LayoutKind.Sequential)]
private struct RECT {
public int Left;
public int Top;
public int Right;
public int Bottom;
}
}
}
I solved this by using Macro Express. They have a free 30-day trial which I used because this is a one-off for me. I wrote a simple macro to copy all of the Find Symbol Results, one line at a time, over into a Notepad document.
Sequence:
* Repeat (x) times (however many symbol results you have)
* Activate Find Symbol Results window
* Delay .5 seconds
* Simulate keystrokes "Arrow Down"
* Clipboard Copy
* Activate Notepad window
* Delay .5 seconds
* Clipboard Paste
* Simulate keystrokes "ENTER"
* End Repeat
I had the same requirement and solved it by using a screenshot tool named HyperSnap, which also has some basic OCR functionality.
If you can encode your symbol as an expression for global Find, then copy-pasting all results from the Find Results window is easy.
E.g., to find all references of property 'foo', you might do a global find for '.foo'.
I'm using Visual Studio 2017 (Version 15.6.4) and it has the functionality directly. Here's an example screenshot (I hit Ctrl+A to select all lines):
The text output for that example is this:
Status Code File Line Column Project
Console.WriteLine(xml); C:\Users\blah\Documents\ConsoleApp1\Program.cs 30 12 ConsoleApp1
static void Console.WriteLine(object)
Console.WriteLine(xml); C:\Users\blah\Documents\ConsoleApp1\Program.cs 18 12 ConsoleApp1
Console.WriteLine(xml); C:\Users\blah\Documents\ConsoleApp1\Program.cs 25 12 ConsoleApp1
I had the same problem. I had to make a list of all the occurrences of a certain method and some of its overloaded versions.
To solve my problem I used ReSharper. (ReSharper -> Find -> Find Usages Advanced).
It also has a very nice tabulated text export feature.
From my previous experience and a few tests I just did, there is no built-in feature to do this.
Why do you want to do this? Why do you want to copy all of the references to the clipboard? As I understand it, given how fast these features are, a static copy of all the references would be relatively useless when you can quickly generate a dynamic and complete one.
You can always extend Visual Studio to add this functionality; see this post at egghead cafe.
You can achieve this in another way: just 'Find All' the selected text and you will be able to capture all the lines.
Visual Studio Code works like a browser, so it is possible to open the developer tools and search for that part of the code.
Menu: Help > Toggle Developer Tools
Write the following instructions in the developer tools console:
var elementos = document.getElementsByClassName("plain match")
console.log(elementos.length)
for(var i = 0; i<elementos.length;i++) console.log(elementos[i].title)
and you will see the match results. Now you can copy the results.
