I'm writing a small application that's supposed to count the number of colour and grayscale jobs going to local printers.
It's meant to cross-reference with what the printer records: if there are any non-grayscale elements in the job, the printer records it as a colour job.
At the moment I'm using the EnumJobs call, but I can't get any colour info from it. I'm starting to think I'll have to read the .spl files as they spool and scan them for a colour palette...
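For reference, this is the kind of thing I have so far with pywin32 (a minimal sketch; as far as I can tell, DEVMODE's Color field only reflects the mode the driver requested, not whether the job actually contains colour content):

    import win32print  # pywin32

    printer_name = win32print.GetDefaultPrinter()
    handle = win32print.OpenPrinter(printer_name)
    try:
        # Level 2 returns JOB_INFO_2 dicts, which include the job's DEVMODE
        for job in win32print.EnumJobs(handle, 0, -1, 2):
            devmode = job["pDevMode"]
            # dmColor: 1 = DMCOLOR_MONOCHROME, 2 = DMCOLOR_COLOR
            mode = "colour" if devmode and devmode.Color == 2 else "grayscale"
            print(job["pDocument"], mode)
    finally:
        win32print.ClosePrinter(handle)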
I've tried a few print auditing applications and some of them are able to do it.
Any pointers will be much appreciated!
I am a stamp collector trying to work with the colors on stamps: specifically, to remove the color of the paper the stamp is printed on, as well as any cancellation marks, and return the colors of the stamp itself. Right now I have scanned them into .png files, but I can certainly change them to other file types. I have tried many suggestions that I have found on the internet, but none seem to help or simply don't work. I am very, very naïve about Python programming (using PyCharm) and writing my own scripts. Can anyone help me? Please keep it basic. I am slowly learning Python; it took me about a week to finally be able to read a file and display it. I had a lot of programming when I was in college (1976-83), ancient languages, but I consider myself capable of learning. Help, help, help!
I have tried all the scripts I could find that return the RGB values of an image, with no luck. I have also tried converting the image to an Excel file. Some programs from the internet seem to be what I am looking for, but they do not run: I can get the program to start, but there is no output, and when I try to run it again I get a warning that I cannot run "****" in parallel.
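For what it's worth, here is the level I'm at: a minimal attempt with Pillow that lists the most common RGB values in a scan ("stamp.png" is just a placeholder for one of my files):

    from PIL import Image  # Pillow

    img = Image.open("stamp.png").convert("RGB")
    width, height = img.size
    # List every distinct RGB triple with its pixel count
    colours = img.getcolors(maxcolors=width * height)
    # Most frequent first -- on my scans the top colour should be the paper
    colours.sort(reverse=True)
    for count, rgb in colours[:10]:
        print(count, rgb)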
I am trying to open NASA's Clementine LIDAR data of the Lunar surface (Link: https://pds-geosciences.wustl.edu/missions/clementine/lidar.html). The data is saved as a *.tab file which is really an ASCII file. When I look for the data I need (x,y,z,r,g,b) I can only find data points x,y,z but not r,g,b.
Main question: If my goal is to open this file in CloudCompare and develop a mesh/DEM file from it, do I need r,g,b data?
Side questions: If so, how do you recommend I get it for data from the '90s? Or at least, how would I go about opening this?
As far as I know, there is no need to have R,G,B in CloudCompare, but you will definitely have to prepare the data in a format that CloudCompare can read. That shouldn't be a problem using simple Python scripts. If you need further help, let me know.
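For example, a rough Python sketch of that preparation step might look like this (the column indices are placeholders; check the PDS label file for the real layout):

    # Placeholder column indices -- check the PDS label for the real layout
    X_COL, Y_COL, Z_COL = 0, 1, 2

    with open("lidar.tab") as src, open("lidar.xyz", "w") as dst:
        for line in src:
            fields = line.split()
            if len(fields) <= max(X_COL, Y_COL, Z_COL):
                continue  # skip headers or malformed lines
            dst.write("%s %s %s\n" % (fields[X_COL], fields[Y_COL], fields[Z_COL]))

CloudCompare can open a plain ASCII .xyz file like this directly.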
You do not need the RGB data to open it in CloudCompare, just the point coordinates. There is no way to retrieve the radiometric information for this point cloud unless you have some imagery from the same epoch; then you could associate the RGB with the point cloud using collinearity equations, for instance.
I'm trying to read data from the Yost Labs 3-Space Sensor Nano into LabVIEW via an NI MyRIO (1900). I was able to set up a sequence that communicates with the sensor through SPI. However, every time I run the program, it just spits out a single value of 255.
I think I understand that I need to include something that allows all the bytes to be read; I just don't know how to go about it.
As an example, I'm trying to read the gyros (0x26), which have a return length of 12 bytes and return a vector (float ×3).
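(For my own sanity, the conversion I expect to do once I have all 12 bytes is, in Python terms, something like the sketch below; I'm assuming big-endian IEEE-754 floats, which is how I read the manual.)

    import struct

    # "raw" is a placeholder for the 12 bytes clocked in over SPI
    raw = bytes(12)
    # Assumed big-endian IEEE-754 encoding, three floats: x, y, z
    gyro_x, gyro_y, gyro_z = struct.unpack(">fff", raw)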
Here is my LabVIEW code:
and here is the manual for the sensor. The commands I'm using are on pages 29-33. In the image, 0x2B is 'read temperature'.
Any help would be greatly appreciated! Thanks :)
Edit: I had messed up the wiring, so now the output jumps between ~35 and 255. I'm still having trouble getting all 3 gyro values from the SPI read.
Quoting from Joe Friedrichsen in his comment:
The express block that resets the sensor is not guaranteed to precede the loop because there is no data flow between them. The LabVIEW runtime can see two independent and parallel groups and may choose to execute them simultaneously (which on the wire might mean reset comes between loop commands) or in "reverse" order. Add a wire from reset block to create a terminal on the loop.
Here's a picture of the fix.
You may wish to consider stringing the error wire through your program and wiring it to the stop terminal of the While Loop. Currently, your loop will keep running even if there's a fault in your hardware. Using the error wire would eliminate the need for the flat sequence structure.
Hope someone here can help me with this, or knows where I can ask.
I have made a distribution model using Maxent (version 3.3.3) in R (dismo package), and thereafter made a map of limiting factors as described in the appendix of Elith et al. (http://onlinelibrary.wiley.com/doi/10.1111/j.2041-210X.2010.00036.x/full), using the Maxent software through the Windows cmd window. The instructions worked fine, and I now have the limiting factors map in a file called lf_map.asc (ca. 10 GB). In order to open the map in ArcGIS, I imported the asc file as a raster into R and saved it as a tif file, using this R script:
    library(raster)  # provides raster() and writeRaster()

    # Read the ASCII grid and write it back out as a (Geo)TIFF
    lf_map <- raster("//home//...//lf_map.asc")
    writeRaster(lf_map, "//home//...//lf_map.tif")
When I open it in ArcGIS, the different variables (factors) from the model are labelled 0-4 in the map (I have 5 variables in the model), but I don't know which variable belongs to which number. I have also tried the ASCII to Raster (Conversion) tool in ArcGIS, but the labels still come out as 0 to 4, not as the names of the variables. Does anyone know how to find this out?
Best regards
Kristin
I stumbled over the answer myself: I checked the cmd window where I had run the script for the limiting factors map (it was still available, since it was in a detached screen), and saw that when the process finished, the information about which number equalled which variable had been printed there. Apparently, the variables were sorted alphabetically: 0 = Aspect, 1 = Mean summer temp, etc.
I have a few tens of full-sky maps, in binary format (FITS), of about 600MB each.
For each sky map I already have a catalog of the positions of a few thousand sources, i.e. stars, galaxies, radio sources.
For each source I would like to (see the Python sketch just after this list):
open the full sky map
extract the relevant section, typically 20MB or less
run some statistics on them
aggregate the outputs to a catalog
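In Python, the per-source step might look roughly like this (an astropy sketch; I've simplified the cutout to a pixel slice, and the names, statistics, and sizes are placeholders):

    import numpy as np
    from astropy.io import fits

    def process_source(map_path, x, y, half_size=256):
        # memmap=True keeps the 600MB map on disk; only the slice is read
        with fits.open(map_path, memmap=True) as hdul:
            data = hdul[0].data
            section = data[y - half_size:y + half_size,
                           x - half_size:x + half_size]
            # Placeholder statistics on the extracted section
            return float(np.mean(section)), float(np.std(section))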
I would like to use Hadoop, possibly running Python via the streaming interface, to process them in parallel.
I think the input to the mapper should be each record of the catalogs; then the Python mapper can open the full sky map, do the processing, and print the output to stdout.
Is this a reasonable approach?
If so, I need to be able to configure Hadoop so that a full sky map is copied locally to the nodes that are processing one of its sources. How can I achieve that?
Also, what is the best way to feed the input data to Hadoop? For each source I have a reference to the full sky map, plus a latitude and longitude.
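For instance, I imagine each input record looking something like this (a hypothetical space-separated layout):

    /maps/allsky_001.fits 187.25 2.05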
Though a few tens of sky maps don't sound like a very big data set, I've used Hadoop successfully as a simple way to write distributed applications/scripts.
For the problem you describe, I would try implementing a solution with Pydoop, and specifically Pydoop Script (full disclaimer: I'm one of the Pydoop developers).
You could set up a job that takes as input the list of sections of the sky map that you want to process, serialized in some sort of text format with one record per line. Each map task should process one of these; you can achieve this split easily with the standard NLineInputFormat.
You don't need to copy the sky map locally to all the nodes as long as the map tasks can access the file system on which it is stored. Using the pydoop.hdfs module, the map function can read the section of the sky map that it needs to process (given the coordinates it received as input) and then emit the statistics as you were saying so that they can be aggregated in the reducer. pydoop.hdfs can read from both "standard" mounted file systems and HDFS.
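As a minimal sketch of what that could look like with Pydoop Script (the one-line-per-source record layout and the statistic below are placeholder assumptions, not the real FITS logic):

    import pydoop.hdfs as hdfs

    def mapper(key, value, writer):
        # Each input line: "map_path lat lon" (placeholder layout)
        map_path, lat, lon = value.split()
        lat, lon = float(lat), float(lon)
        f = hdfs.open(map_path)
        try:
            # Placeholder: in reality, seek to and read only the section
            # around (lat, lon) rather than the first 20MB of the file
            section = f.read(20 * 1024 * 1024)
        finally:
            f.close()
        # Placeholder statistic: size of the section actually read
        writer.emit("%s,%s" % (lat, lon), str(len(section)))

You would then launch it with something like "pydoop script mapper.py input_catalog output_dir", adjusting the input format options as needed.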
Though the problem domain is totally unrelated, this application may serve as an example:
https://github.com/ilveroluca/seal/blob/master/seal/dist_bcl2qseq.py#L145
It uses the same strategy, preparing a list of "coordinates" to be processed, serializing them to a file, and then launching a simple pydoop job that takes that file as input.
Hope that helps!