Python package for converting binary_little_endian to ASCII for a PLY format file?

I am looking to convert a little-endian binary PLY file to ASCII, and I'm looking for a Python package to do it.
This can be done easily with the CloudCompare software, but I want a Python solution. I tried uu.decode, but it didn't help (uuencoding is unrelated to the PLY binary format).

I'm actually the author of exactly such a package: https://github.com/dranjan/python-plyfile
It should be as simple as reading in the PLY file and setting the text attribute:
import plyfile

data = plyfile.PlyData.read('somefile.ply')
data.text = True
data.write('somefile-txt.ply')
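For context on what the conversion entails: the only structural differences between the two encodings are the format line in the header and how the body is represented. Here is a minimal stdlib-only sketch (no plyfile) that hand-converts a tiny vertex-only binary_little_endian PLY; the file names and vertex values are made up for illustration:

```python
import struct

# Build a tiny binary_little_endian PLY in memory: 2 vertices with float x/y/z.
header = (b"ply\n"
          b"format binary_little_endian 1.0\n"
          b"element vertex 2\n"
          b"property float x\nproperty float y\nproperty float z\n"
          b"end_header\n")
body = struct.pack('<3f', 0.0, 1.0, 2.0) + struct.pack('<3f', 3.0, 4.0, 5.0)

# Hand conversion: rewrite the format line, then emit each vertex as text.
ascii_header = header.replace(b'binary_little_endian', b'ascii')
vals = struct.iter_unpack('<3f', body)
ascii_body = ''.join('%g %g %g\n' % v for v in vals).encode()
ascii_ply = ascii_header + ascii_body
```

plyfile does exactly this kind of rewrite for you, including list properties and big-endian files, which is why the package is the better choice for real data.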

Related

PYSPARK - Reading, Converting and splitting a EBCDIC Mainframe file into DataFrame

We have an EBCDIC mainframe-format file which is already loaded into the Hadoop HDFS system. The file has the corresponding COBOL structure as well. We have to read this file from HDFS, convert the file data into ASCII format, and split the data into a DataFrame based on its COBOL structure. I've tried some options which didn't seem to work. Could anyone please suggest a proven or working approach?
For Python, take a look at the Copybook package (https://github.com/zalmane/copybook). It supports most Copybook features, including REDEFINES and OCCURS, as well as a wide variety of PIC formats.
pip install copybook
root = copybook.parse_file('sample.cbl')
For parsing into a PySpark dataframe, you can use a flattened list of fields and use a UDF to parse based on the offsets:
offset_list = root.to_flat_list()
Disclaimer: I am the maintainer of https://github.com/zalmane/copybook
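Independent of Copybook, the byte-level EBCDIC-to-ASCII conversion itself can be done with Python's built-in EBCDIC codecs (cp037 is the US/Canada variant). A minimal sketch, assuming a hypothetical fixed-width layout; in practice the field names and offsets would come from the parsed copybook rather than being hard-coded:

```python
import codecs

# One fixed-width EBCDIC record: NAME PIC X(5), AMOUNT PIC 9(3).
# (Hypothetical layout -- real offsets come from the parsed copybook.)
record = bytes([0xC8, 0xC5, 0xD3, 0xD3, 0xD6,   # 'HELLO' in EBCDIC cp037
                0xF1, 0xF2, 0xF3])              # '123'  (digits are 0xF0-0xF9)

offsets = [('name', 0, 5), ('amount', 5, 3)]    # (field, start, length)

fields = {}
for name, start, length in offsets:
    # cp037 is a codec shipped with Python's standard library.
    fields[name] = codecs.decode(record[start:start + length], 'cp037')

print(fields)  # {'name': 'HELLO', 'amount': '123'}
```

In a PySpark UDF you would apply the same slicing-and-decoding per record; note this simple approach does not cover packed-decimal (COMP-3) fields, which need separate unpacking.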
Find the COBOL Language Reference manual and research the functions DISPLAY-OF and NATIONAL-OF. See also: https://www.ibm.com/support/pages/how-convert-ebcdic-ascii-or-ascii-ebcdic-cobol-program

Conversion between knitr and sweave

This might have been asked before, but until now I couldn't find a really helpful answer for me.
I am using R Studio with knitr and a colleague of mine who I need to cooperate with uses the sweave format. Is there a good way to convert a script back and forth between these two?
I have already found "Sweave2knitr" and hoped it would output an .Rmd with all chunks changed (<<>>= to {r} etc.), but this is not the case. My main problem is that I would also need the option to convert from .Rmd back to .Rnw so that my colleague can re-edit my revisions.
Thanks a lot!
To process the code chunks and convert the .Rnw file to .tex, use the knit() function in the knitr package rather than Sweave():
R -e 'library(knitr);knit("my_file.Rnw")'
Sweave2knitr() is for converting old Sweave-based .Rnw files to the knitr syntax.
In RStudio's global options, change the "Weave Rnw files using" setting from Sweave to knitr.
The Rnw format is really LaTeX with some modifications, whereas the Rmd format is Markdown with some modifications. There are two main flavours of Rnw, the one used by Sweave being the original, and the one used by knitr being a modification of it, but they are very similar.
It's not hard to change Sweave flavoured Rnw to knitr flavoured Rnw (that's what Sweave2knitr does), but changing either one to Rmd would require extensive changes, and probably isn't feasible: certainly I'd expect a lot of manual work after the change.
So for your joint work with a co-author, I would recommend that you settle on a single format, and just use that. I would choose Rmd for this: it's much easier for your co-author to learn Markdown than for you to learn LaTeX. (If you already know LaTeX, that might push the choice the other way.)

I have a telephony log in base64 on a mac that I can't make sense of

I'm digging into a log file of telephony data on a Mac. A few entries are intelligible plaintext, but most of it is base64, and without knowing what it originally represented I haven't been able to decode it into anything meaningful. The data comes in 108-character blocks that I'm confident are base64 (all the right characters and none that aren't, ending in equals signs), but I'm at a loss as to how to get anything useful out of them.
Someone previously was able to use this data productively, but how isn't documented. Does anyone have an idea what it might have been before it was base64-encoded, or how to get it back into a usable format?
Why not try a Python script? There is an existing post that may help: Python base64 data decode.
If you don't know how to use Python, here is the official Beginner's Guide:
https://www.python.org/about/gettingstarted/
Download it from here:
https://www.python.org/downloads/mac-osx/
I would write a Python program like this:
import base64
import binascii

with open('yourlog.log', 'r') as infile, open('result.log', 'wb') as outfile:
    for line in infile:
        line = line.strip()
        if not line:
            continue
        try:
            outfile.write(base64.b64decode(line))
        except (ValueError, binascii.Error):
            # Skip lines that are plaintext rather than base64.
            continue
print('Finished!')
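Once decoded, the bytes still need interpreting. A quick hex dump helps spot known signatures, e.g. Apple binary property lists start with the bytes "bplist00". A small sketch (the sample block below is made up for illustration):

```python
import base64
import binascii

def inspect_block(b64_block: str) -> str:
    """Decode one base64 block and return a hex/ASCII dump for eyeballing."""
    raw = base64.b64decode(b64_block)
    lines = []
    for i in range(0, len(raw), 16):
        chunk = raw[i:i + 16]
        hexpart = binascii.hexlify(chunk, ' ').decode()
        textpart = ''.join(chr(b) if 32 <= b < 127 else '.' for b in chunk)
        lines.append(f'{i:04x}  {hexpart:<47}  {textpart}')
    return '\n'.join(lines)

# 'YnBsaXN0MDA=' is base64 for b'bplist00', the binary plist magic.
print(inspect_block('YnBsaXN0MDA='))
```

If the dump shows a recognizable magic number, the plistlib module (for binary plists) or a protobuf decoder may get the rest of the way to structured data.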

Reading from compressed FASTA bz2 file using skbio

Is it possible to read from a compressed file (e.g., a bz2-compressed FASTA)? I usually use skbio.sequence.Sequence.read but don't see this option there.
Thanks!
This is possible to do as follows:
import skbio
seq = skbio.io.read("seqs.fna.bz2", format='fasta', compression='bz2', into=skbio.DNA)
I'm using scikit-bio 0.5.0, but this should be possible with earlier versions as well. While I'm explicitly defining the compression type, that's generally not necessary.
The relevant documentation is the scikit-bio I/O registry docs, in particular skbio.io.read.

I need to write a .DDS file cross-platform, can someone point me to example?

I need to create a .DDS file with code that runs on both OSX and Windows. Although the format doesn't look difficult, I'd still like an example of writing the file. Note I don't need to read it, just write it.
C or C++ and RGBA bitmap.
I finally resorted to writing a RAW file and using GraphicConvertor (Mac) to read it and write the DDS file. I think Photoshop can do it too. RAW files are simply RGB, RGBA, or similar pixel data written straight to a binary file; in the reading application you specify the dimensions so it can load the data, then export to whatever format you need. Not a perfect solution, but it worked for what I needed.
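Writing the DDS header directly is also feasible cross-platform. Below is a minimal sketch in Python of the uncompressed 32-bit RGBA case (no mipmaps, no DXT compression), following the DDS_HEADER layout from Microsoft's documentation; the function name is my own, and the channel-mask convention here assumes R in the low byte. The same fields map one-to-one onto a packed C struct written with fwrite():

```python
import struct

def write_dds_rgba(path, width, height, rgba_bytes):
    """Write an uncompressed 32-bit RGBA .DDS file (no mipmaps)."""
    DDSD_FLAGS = 0x1 | 0x2 | 0x4 | 0x8 | 0x1000   # CAPS|HEIGHT|WIDTH|PITCH|PIXELFORMAT
    DDPF_FLAGS = 0x1 | 0x40                       # ALPHAPIXELS|RGB
    pitch = width * 4                             # bytes per scanline
    header = struct.pack(
        '<4s7I11I8I4I1I',
        b'DDS ',                                  # magic number
        124, DDSD_FLAGS, height, width, pitch, 0, 0,     # dwSize..dwMipMapCount
        *([0] * 11),                              # dwReserved1
        32, DDPF_FLAGS, 0, 32,                    # pixelformat: size, flags, fourCC, bit count
        0x000000FF, 0x0000FF00, 0x00FF0000, 0xFF000000,  # R, G, B, A masks
        0x1000, 0, 0, 0,                          # dwCaps = DDSCAPS_TEXTURE
        0)                                        # dwReserved2
    with open(path, 'wb') as f:
        f.write(header)                           # 4-byte magic + 124-byte header
        f.write(rgba_bytes)                       # raw pixel data, top-left origin
```

The output is a fixed 128-byte header followed by the raw pixels, which is exactly why the RAW-file workaround above also works: the pixel payload is identical.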
