How do I plot some data (using xmgrace in the terminal) using dots, not lines, without explicitly changing it in the GUI? - bash

I'm using xmgrace in the terminal and want the data to be displayed directly as dots instead of lines. Achieving this in the GUI is simple, but I have to read in multiple files and do not want to change it every time I start xmgrace. Can I add a command to the files that are read in? Or can I use an option in the terminal when I start xmgrace?

The correct way to set the appearance of a plot from the command line is to use an existing parameter file, specified with the flag
-param settings.par
The parameter file can be stored beforehand, using the GUI to modify the appearance of an existing, similar plot. Modify the plot as you like, then save the appearance settings in a parameter file (convention is to use the extension .par) using Plot > Save Parameters.
A typical example command would then be
xmgrace -block data2.dat -bxy 1:4 -block data2.dat -bxy 1:6 -param settings.par
In my experience, putting the -param flag last in the command works best.
There really is no need to be manually text-editing your grace plot files (.agr) to achieve this.
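For reference, the part of a parameter file that switches a set from a line to plain dots is only a few directives. The ones below are from memory, a real file saved by Grace contains many more lines, and depending on the version the leading @ may or may not appear, so saving a file from the GUI as described above remains the safest route:
@    s0 line type 0
@    s0 symbol 1
@    s0 symbol size 0.500000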

xmgrace has a full and complex language for expressing the configuration of the look and feel for the graph. There are two ways to go about what you described. The simple way is to load the dataset into xmgrace, change everything to make it look the way you want, then save the dataset. You will see the dataset now has tons of lines describing the configuration "#g0 on" "# s0 linestyle 1" etc with your dataset at the end, terminated by a &.
To replicate that graph, spit out the saved header, insert your data, and then insert the trailing &. Feed the result into xmgrace and everything will be all set up. Once you get comfortable you can start doing dynamic substitutions to rename the graph, change the symbol, or whatever. See /usr/share/grace/examples for examples of what grace can do (and the config files that generate them).
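As a rough illustration of that layout (the directive names are from memory, a file saved by Grace will contain many more of them, and the two-column numbers are dummy data):
@    s0 symbol 1
@    s0 symbol size 0.500000
@    s0 line type 0
@target G0.S0
@type xy
1.0 2.0
2.0 4.0
3.0 9.0
&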
The more complex method is to load the dataset, save it immediately, change it to look the way you want, and then save it again under a different name. Run diff on the two files and you will get the set of changes, which is essentially the minimal set of fixed header lines you need to prepend to the dataset. You might need at most a handful of other lines from the non-changing portion, but that is somewhat rare. It usually isn't worth the effort to reduce the prefix size further.

Related

What is the best output format / platform to display different sorts of extracted data?

I am writing a script that extracts different types of data from different kinds of custom log files.
But before I continue writing, I want to decide what output format / platform to target, so that the data is displayed properly and can be read properly.
examples:
sometimes it is certain lines of text with an important word in them
sometimes it is a block of text between a start and an end phrase
sometimes it is data points, which I then want to visualize in a line chart
....
OR it is a combination of those
At first I thought I would write the output in Markdown, so I could for instance create foldable blocks and just unfold the part I want to read.
But Markdown is not versatile enough, meaning I can't create line charts or other kinds of visualizations (thinking about the future).
So now I put the different types of data into different output formats and visualize them in an HTML file:
the blocks of text go into a Markdown file, which I then import through a JavaScript Markdown viewer,
and for the data points I create a line chart through a JavaScript charting library
.....
HOWEVER, I am not sure that this is the best/correct way to go .....
What is your advice?

Is it possible to specify a list of good names for pylint just within a single Python file?

I'm looking for something like
[BASIC]
good-names=X,y
as in a pylintrc, but I'd like to limit these names to being good only within a single Python file.
I thought about message control like # pylint: disable=invalid-name at the top of the file, but that is too broad. Ideally, I'd like to allow only these two otherwise-invalid names, X and y, to be considered good within a single file. Is that possible with pylint?
The only way I have been able to achieve this effect is to disable the check and then re-enable it immediately afterwards, as in the sketch below. It's not what you wanted, but at least it doesn't affect the whole file, and a comment of # pylint: enable=xxx is easy to find when you want to go cleaning up later on (for example, if they ever add good-names to in-file message control).
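A minimal sketch of that pattern, assuming the two names live at module level (the list values are invented placeholders):

# pylint: disable=invalid-name
X = [[1.0, 2.0], [3.0, 4.0]]  # capital X kept deliberately (scikit-learn style)
y = [0, 1]
# pylint: enable=invalid-name


def describe(features, labels):
    """Names from here on are checked by invalid-name as usual."""
    return len(features), len(labels)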

Is there a way to change the projection in a topojson file?

I am trying to create a topojson file projected using geoAlbersUsa, originating from the US Census's ZCTA (Zip Codes, essentially) shapefile. I was able to successfully get through the examples in the excellent https://medium.com/@mbostock/command-line-cartography-part-1-897aa8f8ca2c using the specified maps, and now I'm trying to get the same result using the Zip Code-level shapefiles.
I keep running into various issues due to the size of the file and the length of the strings within the file. While I have been able to create a geojson file and a topojson file, I haven't been able to give it the geoAlbersUsa projection I want. I was hoping to find something to convert the current topojson file into a topojson file with a geoAlbersUsa projection but I haven't been able to find any way.
I know this can be done programmatically in the browser, but everything I've read indicates that performance will be significantly better if as much as possible can be done in the files themselves first.
Attempt 1: I was able to convert the ZCTA-level shapefile to a geojson file successfully using shp2json (as in Mike Bostock's example) but when I try to run geoproject (from d3-geo-projection) I get errors related to excessive string length. In node (using npm) I installed d3-geo-projection (npm install -g d3-geo-projection) then ran the following:
geoproject "d3.geoAlbersUsa()" < us_zips.geojson > us_zips_albersUsa.json
I get errors stating "Error: Cannot create a string longer than 0x3fffffe7 characters"
Attempt 2: I used ogr2ogr (https://gdal.org/programs/ogr2ogr.html) to create the geojson file (instead of shp2json), then ran tried to run the same geoproject code as above and got the same error.
Attempt 3: I used ogr2ogr to create a geojson sequence file (instead of a regular geojson file), then ran geo2topo to create the topojson file from that sequence. While this succeeded in creating the topojson file, the result still doesn't include the geoAlbersUsa projection.
I get from the rather obtuse documentation of ogr2ogr that an output projection can be specified using -a_srs but I can't for the life of me figure out how to specify something that would get me the geoAlbersUsa projection. I found this reference https://spatialreference.org/ref/sr-org/44/ but I think that would get me the Albers and it may chop off Alaska and Hawaii, which is not what I want.
Any suggestions here? I was hoping I'd find a way to change the projection in the topojson file itself since that would avoid the excessively-long-string issue I seem to run into whenever I try to do anything in node that requires the use of the geojson file. It seems like possibly that was something that could be done in earlier versions of topojson (see Ways to project topojson?) but I don't see any way to do it now.
Not quite an answer, but more than a comment..
So, I Googled just "0x3fffffe7" and found this comment on a random GitHub/NodeJS project, and based on reading it, my gut feeling is that the Node stuff and/or the D3 stuff you're using is reducing your entire ZCTA-level shapefile down to a single string stored in memory! That's not good for a continent-scale map with such granular detail.
Moreover, the person who left that comment suggested that the OP in that case would need a different approach to introduce their dataset to the client (which I suppose is a browser?). In your case, might it work if you query out each state's collection of zips into its own shapefile (ogr2ogr can do this using OGR_SQL), which would give you 50 different shapefiles? Then run each of these through your conversions to get JSON / geoAlbers. To test the concept, try exporting just one state and see if everything else works as expected.
That being said, I'm concerned that your approach to this project has an unworkable UI/architectural expectation: I just don't think you can put that much geodata in a browser DIV! How big is the DIV, full screen I hope?!?
My advice would be to think of a different way to present the data. For example, an inset DIV to "select your state"; clicking a state then zooms the main DIV to a larger view of that state and simultaneously pulls down and renders that state's specific ZCTA-level data, using the 50 files you prepped with the strategy I mentioned above. Does that make sense?
Here's a quick example for how I expect you can apply the OGR_SQL to your scenario, adapt to fit:
ogr2ogr idaho_zcta.shp USA_zcta.shp -sql "SELECT * FROM USA_zcta WHERE STATE_NAME = 'ID'"
Parameters as follows:
idaho_zcta.shp < this is your new file
USA_zcta.shp < this is your source shapefile
-sql < this signals the OGR_SQL query expression
As for the query itself, a couple tips. First, wrap the whole query string in double-quotes. If something weird happens, try adding leading and trailing spaces to the start and end of your query, like..
" SELECT ... 'ID' "
It's odd I know, but I once had a situation where it only worked that way.
Second, relative to the query, the table name is the same as the shapefile name, only without the ".shp" file extension. I can't remember whether there is case-sensitivity between the shapefile name and the query string's table name. If you run into a problem, give the shapefile an all-lowercase name and use lowercase in the SQL, too.
As for your projection conversion, you're on your own there. That geoAlbersUsa looks like it's not an industry-standard (i.e. EPSG-coded) projection but a D3-specific one, intended exclusively for the browser, so ogr2ogr isn't going to handle it. But I agree with the strategy of converting the data in advance. Hopefully the conversion pipeline you already researched will work if you just have much smaller (i.e. state-scale) datasets to put through it.
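If the single-state test works, the rest of the pipeline from the question should run unchanged on the smaller files. Hypothetically, for the Idaho extract above it would look something like this (file names are placeholders, and shp2json / geo2topo usage is as in the Bostock tutorial already cited):
shp2json idaho_zcta.shp > idaho_zcta.geojson
geoproject "d3.geoAlbersUsa()" < idaho_zcta.geojson > idaho_zcta_albersUsa.json
geo2topo zips=idaho_zcta_albersUsa.json > idaho_zcta_topo.json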
Good luck.

Converting All Blocks to Lines and Text

When I receive a drawing, I wish to remove all definitions from previous drafters, such as blocks, styles, layers, groups, xrefs, etc., in order to retain only primitives: text, lines and arcs; in short, a single flat drawing.
This is a very routine activity, and I've found many dissimilar answers on the internet, often involving non-standard, non-canonical combinations of the following commands:
LAYMRG, PURGE
AUDIT
SELECTSIMILAR
WBLOCK
EXPLODE, XPLODE
DIMSTYLE, BATTMAN
DXFOUT, WMFOUT, DXFIN, WMFIN
BURST
Unfortunately, after applying most of them, the result sometimes retains many non-purgeable objects, including:
Non-explodable blocks,
Dimensions with their own styles,
Blocks losing their text attributes (by XPLODE),
Changed fonts (by WMFOUT).
Does AutoCAD have some canonical way to do this?
I don't think it's that easy. If there is such a command, I don't know of it, but...
In the situation you describe, you should attach the drawing you receive as an external reference (XRef). That way the referenced drawing can be displayed darker or lighter, but without making so many changes to it. Also, if you get a new version of the file, for example because the architect made some changes, you don't need to redo anything: just reload the reference and the new version is displayed.
You will have two separate files:
base, for example architecture
branch, for example electrical, HVAC, and so on: your own work.
Of course you can think about a script (an SCR file or LISP) that runs all the commands you want with a single command, as sketched below. Creating such a script is not very complicated, but in my opinion using an XRef is easy and flexible enough.
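Purely as an illustration (prompt sequences differ between AutoCAD releases, so step through the commands at the command line once and record the exact responses before putting them in a script), a script file that purges unused definitions without prompting might look roughly like this:
-PURGE
A
*
N
Each line is sent as if typed at the command line. Running the purge more than once helps, since purging a block can expose further unused nested definitions; an EXPLODE step would come before it, but it needs an extra blank line (Enter) to close the selection set, which is easy to lose when editing script files.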

What is the best way to edit the middle of an existing flat file?

I have a tool that creates variables for a simulation. The current workflow involves hand-copying those variables into the simulation input file. The input file is a standard flat file, i.e. not binary or XML. I would like to automate the addition of the variables to the flat input file.
The new variables replace existing variables in the file, e.g.
New Variables:
Length 10
Height 20
Depth 30
Old Variables:
...
Weight 100
Age 20
Length 10
Height 20
Depth 30
...
I would like the new variables to overwrite the old ones; they are about 200 lines into the flat input file.
Thanks for any insights.
P.S. This is on Windows.
If you're stuck with a flat file, then you're stuck with the old-fashioned way of updating it: read from the original, write to a temp file, and for each row either copy it unchanged or write the changed data instead. To add data, write it to the temp file at the appropriate point; to delete data, simply don't copy it from the original file.
Finally, close both files and rename the temp file to the original file name.
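A minimal sketch of that approach in Python, since the platform is Windows and no language was specified (the file name is hypothetical and the variable values are the ones from the question):

import os

# New values produced by the variable-generation tool (from the question).
new_values = {"Length": "10", "Height": "20", "Depth": "30"}

src = "sim_input.txt"   # hypothetical name of the flat input file
tmp = src + ".tmp"

with open(src) as original, open(tmp, "w") as temp:
    for line in original:
        key = line.split(maxsplit=1)[0] if line.strip() else ""
        if key in new_values:
            # Overwrite the old variable line with the new value.
            temp.write(f"{key} {new_values[key]}\n")
        else:
            temp.write(line)

os.replace(tmp, src)    # swap the temp file into place (works on Windows too)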
Alternatively, it might be time to think about a little database.
For something like this I'd be looking at a simple template engine. You'd have a base template with predefined marker tokens instead of variable values; you then pass the required values to your engine along with the template and it spits out the resulting file, all present and correct. There are a number of open-source template engines available in Java that would meet your needs, and I imagine such things are also available in your language of choice. You could even roll your own without too much difficulty.
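For example, with nothing more than the Python standard library the idea looks roughly like this (the template text and values are invented for illustration):

from string import Template

# Base template with marker tokens instead of hard-coded variable values.
template = Template(
    "Weight 100\n"
    "Age 20\n"
    "Length $length\n"
    "Height $height\n"
    "Depth $depth\n"
)

# Values supplied by the variable-generation tool.
print(template.substitute(length=10, height=20, depth=30))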
Note that under Unix, one would probably look at using mmap() because you can then use functions such as memmove() to move the data around and add new data or truncate() the result if the file is then smaller (you may also want to use truncate() to grow the file).
Under MS-Windows, you have the MapViewOfFileEx() function to do the same thing. The API is different, though, and I'm not exactly sure what happens, or how to grow/shrink the file (MSDN also documents a truncate()-like function, and maybe that works).
Of course, it's important to use memcpy() or memmove() carefully so you don't overwrite the wrong data or run outside the buffer.
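As an illustration only (Python's mmap module wraps mmap() on Unix and the Windows file-mapping API, so the same sketch runs on both; the file name and offset are invented), inserting bytes in the middle of a file with a memory map looks roughly like this:

import mmap

insert_at = 200            # byte offset where the new data goes (hypothetical)
new_data = b"Length 10\n"  # bytes to splice in

with open("sim_input.txt", "r+b") as f:      # hypothetical file name
    old_size = f.seek(0, 2)                  # current file length
    f.truncate(old_size + len(new_data))     # grow the file first
    with mmap.mmap(f.fileno(), 0) as mm:
        # Shift the tail right to open a gap (memmove-style), then copy in the new bytes.
        mm.move(insert_at + len(new_data), insert_at, old_size - insert_at)
        mm[insert_at:insert_at + len(new_data)] = new_data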
