Neighborhoods in MetPy - nearest-neighbor

Does MetPy have a built-in tool for neighborhood functions applied to meteorological parameter grids, such as fractional coverage within some radius of influence? I had built something like this with GEMPAK several years ago but didn't know if that sort of feature existed elsewhere. Thanks!

The only things that sound remotely like that are the implementations of Barnes and Cressman interpolation, as well as some smoothing functions. That doesn't sound like what you really want. Feel free to open a feature request, though.
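For reference, a neighborhood fractional-coverage calculation of the kind being asked about can be put together with plain NumPy/SciPy rather than MetPy. A minimal sketch, assuming SciPy's generic_filter; the field, threshold, and radius below are made-up illustrations:

    import numpy as np
    from scipy import ndimage

    def circular_footprint(radius_px):
        """Boolean mask of grid cells within radius_px of the center cell."""
        y, x = np.ogrid[-radius_px:radius_px + 1, -radius_px:radius_px + 1]
        return x**2 + y**2 <= radius_px**2

    def fractional_coverage(field, threshold, radius_px):
        """Fraction of points within the radius that meet or exceed the threshold."""
        exceed = (field >= threshold).astype(float)
        return ndimage.generic_filter(exceed, np.mean,
                                      footprint=circular_footprint(radius_px),
                                      mode="constant", cval=0.0)

    # Example: fraction of neighboring points with reflectivity >= 40 dBZ
    # within a 5-gridpoint radius (the radius is in grid points, not km).
    refl = np.random.uniform(0, 60, size=(50, 50))
    coverage = fractional_coverage(refl, threshold=40.0, radius_px=5)

generic_filter is slow on large grids; convolving the thresholded field with the circular footprint and dividing by the footprint's cell count gives the same result faster.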

Related

Looking for a reasonable way to have several shapes as usable buttons in Processing

I recently (5 weeks ago) started my first school year in a school-based apprenticeship to become an IT assistant.
We're learning programming and are starting with very basic Processing things, while the ultimate plan is to get into C#.
Now I understand that Processing might not be the best language for my little project, but I still would like to work this out somehow.
What I want to build is a "Stargate Dial Computer". If you know the TV Show you'll know what I'm talking about.
I wanted to make it visually appealing so I decided to use one of the available tools to create my shapes as I am using a DHD (term from the show) for the dial process - see picture: https://i.imgur.com/r7jBjRG.png
This small shape setup is already over 500 lines of code, and that seems unwise in itself. Besides that, the plan is to have every single one of these trapezoids be a pushable button - but to achieve that manually I'd have to check each one's coordinates against the mouse position to use them as buttons.
What I'm asking for now is any input on how to work with these shapes in a logical way that makes my idea possible at all.
Something like checking the shape's color instead of checking its coordinates some 40 times, and getting the "active" shape's size in some kind of function. Or a way to walk through every shape one by one in a loop, checking each beginShape/endShape instance - if that wouldn't be a performance nightmare.
Keep in mind that I am a beginner. I do know the basics, also of other languages, and I can apply some programming logic here and there - but since I'm not sure yet what Processing can and can't do, I'm looking for an answer to whether this is even reasonable or possible.
Any help and ideas would be much appreciated!
Thanks!
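One common way to make arbitrary flat shapes act as buttons is a point-in-polygon hit test: store each trapezoid's vertices once, then on every mouse press loop over the shapes and test the mouse position against each one. A sketch of the idea (written in Python rather than Processing purely for illustration; the vertex lists and button names are made up):

    def point_in_polygon(px, py, vertices):
        """Ray-casting test: is the point (px, py) inside the given polygon?"""
        inside = False
        n = len(vertices)
        for i in range(n):
            x1, y1 = vertices[i]
            x2, y2 = vertices[(i + 1) % n]
            # Count how many edges a horizontal ray from the point crosses.
            if (y1 > py) != (y2 > py):
                x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                if px < x_cross:
                    inside = not inside
        return inside

    # Each "button" is just a vertex list plus whatever data the dial needs.
    buttons = [
        {"name": "chevron_1", "vertices": [(10, 10), (60, 10), (50, 40), (20, 40)]},
        # ... one entry per trapezoid on the DHD
    ]

    def on_mouse_pressed(mx, my):
        for button in buttons:
            if point_in_polygon(mx, my, button["vertices"]):
                print("pressed", button["name"])

The same structure ports directly to Processing: keep the vertices in an array of objects, draw each shape from that array in draw(), and run the hit test in mousePressed(). The color idea from the question also works: draw each shape in a unique flat color to an off-screen buffer and look up the pixel color under the mouse.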

Mathematical plotting

I want to make an application that plots mathematical functions, and I'd like to know the best language for it. It should have the following features:
An area to draw the function.
Support for anti-aliasing.
A scroll bar to change other dependent variables (such as a in y=(x-a)*x).
It should be fast enough (calculations will be done hundreds of times).
Parsing mathematical expressions using regex (is there a better way?).
Any other suggestions would be useful.
Edit: this can be useful in many ways, such as discarding repeated calculations.
Example: plotting y=4+1 with 1000 points performs 999 repeated calculations; performance can be improved with a tree model that recalculates only nodes whose children have changed.
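For what it's worth, a minimal sketch of that tree idea (the node layout and the two operators here are made-up illustrations; each node caches its value and is re-evaluated only when marked dirty):

    class Node:
        """Expression-tree node that caches its value until marked dirty."""
        def __init__(self, op, left=None, right=None, value=None):
            self.op, self.left, self.right = op, left, right
            self.value, self.dirty = value, value is None

        def set_value(self, value):
            # For leaf nodes (constants/variables); a fuller version would
            # also mark every ancestor dirty here.
            self.value, self.dirty = value, False

        def eval(self):
            if not self.dirty:
                return self.value          # cache hit: nothing recomputed
            a, b = self.left.eval(), self.right.eval()
            self.value = a + b if self.op == "+" else a * b
            self.dirty = False
            return self.value

    # y = 4 + 1 contains no x, so after the first eval() every call is a cache hit.
    four, one = Node(None, value=4), Node(None, value=1)
    y = Node("+", four, one)
    print(y.eval())   # computed once: 5
    print(y.eval())   # cached: 5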
Regex alone will not do for parsing math expressions.
Personally, I write recursive-descent parsers. You might be surprised how easy and flexible they are.
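A minimal recursive-descent sketch for expressions like (x - a) * x, to show how little machinery this takes (the grammar, token set, and example expression are assumptions, not anything from the question; error handling is left out):

    import math, re

    # expr   -> term (('+'|'-') term)*
    # term   -> factor (('*'|'/') factor)*
    # factor -> NUMBER | NAME | NAME '(' expr ')' | '(' expr ')' | '-' factor

    def tokenize(text):
        for tok in re.findall(r"\d+\.?\d*|[A-Za-z_]\w*|\S", text):
            yield tok
        yield None                       # end-of-input marker

    class Parser:
        def __init__(self, text, variables):
            self.tokens = tokenize(text)
            self.vars = variables
            self.current = next(self.tokens)

        def advance(self):
            tok, self.current = self.current, next(self.tokens)
            return tok

        def expr(self):
            value = self.term()
            while self.current in ("+", "-"):
                if self.advance() == "+":
                    value += self.term()
                else:
                    value -= self.term()
            return value

        def term(self):
            value = self.factor()
            while self.current in ("*", "/"):
                if self.advance() == "*":
                    value *= self.factor()
                else:
                    value /= self.factor()
            return value

        def factor(self):
            tok = self.advance()
            if tok == "-":
                return -self.factor()
            if tok == "(":
                value = self.expr()
                self.advance()           # consume ')'
                return value
            if tok[0].isdigit():
                return float(tok)
            if self.current == "(":      # function call, e.g. sin(x)
                self.advance()           # consume '('
                arg = self.expr()
                self.advance()           # consume ')'
                return getattr(math, tok)(arg)
            return self.vars[tok]        # plain variable

    # y = (x - a) * x at x = 3, a = 1  ->  6.0
    print(Parser("(x - a) * x", {"x": 3.0, "a": 1.0}).expr())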
If you want the output to look like it's varying continuously (even when it isn't), what I do is not paint to the output window directly.
Rather, I paint to a memory bitmap, which I then block-transfer to the visible window.
This eliminates all flashing, and makes it look fast even if it's only actually being repainted a few times per second.
Remember, your time-hog is much more likely to be painting, not calculating, so don't waste time trying to figure out how to optimize the calculation.
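A sketch of that off-screen-bitmap pattern, here using pygame as the window and bitmap layer (an assumption; the answer above describes the same idea in C/C++/C# terms, and the scaling constants are arbitrary):

    import math
    import pygame

    WIDTH, HEIGHT = 800, 600

    def render_plot(surface, a):
        """Draw y = (x - a) * x into an off-screen surface."""
        surface.fill((255, 255, 255))
        points = []
        for px in range(WIDTH):
            x = (px - WIDTH / 2) / 40.0        # pixel -> math coordinates
            y = (x - a) * x
            points.append((px, HEIGHT / 2 - y * 40.0))
        pygame.draw.lines(surface, (0, 0, 200), False, points, 2)

    def main():
        pygame.init()
        screen = pygame.display.set_mode((WIDTH, HEIGHT))
        buffer = pygame.Surface((WIDTH, HEIGHT))   # the memory bitmap
        running = True
        while running:
            for event in pygame.event.get():
                if event.type == pygame.QUIT:
                    running = False
            a = 2.0 * math.sin(pygame.time.get_ticks() / 1000.0)  # animate 'a'
            render_plot(buffer, a)             # paint off-screen first
            screen.blit(buffer, (0, 0))        # then block-transfer in one go
            pygame.display.flip()
        pygame.quit()

    if __name__ == "__main__":
        main()

All drawing happens on the in-memory Surface; the visible window only ever receives one finished blit per frame, which is what removes the flicker.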
As far as a "best language", it depends what you're trying to do.
I've done all this in C, C++, and C#.
I'm sure Java or other compiled languages would work just as well.
I don't think there is a "best language" for it, but I can give you some hints. One way would be to use C++ with the gnuplot library. Another would be to use C++ with the Qt and Qwt libraries; Qt will easily handle regex too.
The latter is a solution I've personally used in past work without particular problems, while the former is only a theoretical idea.

How can I avoid people using my code for evil? [closed]

I'm not sure if this is quite the right place, but it seems like a decent place to ask.
My current job involves manual analysis of large data sets (at several levels, each more refined and done by increasingly experienced analysts). About a year ago, I started developing some utilities to track analyst performance by comparing results at earlier levels to final levels. At first, this worked quite well - we used it in-shop as a simple indicator to help focus training efforts and do a better job overall.
Recently though, the results have been taken out of context and used in a way I never intended. It seems management (one person in particular) has started using the results of these tools to directly affect EPRs (enlisted performance reports - it's an air force thing, but I assume something similar exists elsewhere) and similar paperwork. The problem isn't who is using these results, but how. I've made it clear to everyone that the results are, quite simply, error-prone.
There are numerous unavoidable obstacles to generating this data, which I have worked to minimize with some nifty heuristics and such. Taken in the proper context, they're a useful tool. Out of context however, as they are now being used, they do more harm than good.
The manager(s) in question are taking the results as literal indicators of whether an analyst is performing well or poorly. The results are being averaged and individual scores are being ranked as above (good) or below (bad) average. This is being done with no regard for inherent margins of error and sample bias, with no regard for any sort of proper interpretation. I know of at least one person whose performance rating was marked down for an 'accuracy percentage' less than one percentage point below average (when the typical margin of error from the calculation method alone is around two to three percent).
I'm in the process of writing a formal report on the errors present in the system ("Beginner's Guide to Meaningful Statistical Analysis" included), but all signs point to this having no effect.
Short of deliberately breaking the tools (a route I'd prefer avoiding but am strongly considering under the circumstances), I'm wondering if anyone here has effectively dealt with similar situations before? Any insight into how to approach this would be greatly appreciated.
Update:
Thanks for the responses - plenty of good ideas all around.
If anyone is curious, I'm moving in the direction of 'refine, educate, and take control of interpretation'. I've started rebuilding my tools to better negate or track error and to automatically generate any numbers and graphs they could want, with documentation included throughout (while hiding away, as obscure references, the raw data they currently seem so eager to import into the 'magical' Excel sheets).
In particular, I'm hopeful that visual representations of error and properly created ranking systems (taking into account error, standard deviations, etc.) will help the situation.
Either modify the output to include error information (so if the error is ±5%, don't output 22%, output 17%-27%), or educate those it is being used against about the error, so that they can defend themselves when it is used against them.
Well, you seem to have run afoul of the Law of Unintended Consequences in the context of human behavior.
Unfortunately, once the cat is out of the bag, it's pretty hard to put back in. You have a few options (which are not mutually exclusive, by the way) to consider, including:
Alter the reports so that their data can no longer be abused in the way you describe.
Work with management to help them understand why their use of your data is improper or misleading.
Work with those whose performance is being measured to pressure management to rethink their policy on the matter.
Work with management/analysts to come up with a viable means to measure performance in a way that is fair to everyone.
Break the reports in a manner that makes them unusable for any purpose.
Clearly there is a desire on the part of management to get analytics on performance of analysts. Likely there is a real need for this ... and your reports happened to fill a void in the available information. The best option for everyone would be to find a way to effectively and fairly fill this need. There are many possible ways to achieve this - from dropping dense rankings in favor of performance tiers to using time-over-time variance to refine performance measurements.
Now, it's entirely possible that the existing reports you've provided simply cannot be applied in a fair and accurate manner to address this problem. In which case, you should work with your management team to make sure they understand why this is the case - and either redefine the way performance is measured or take the time to develop an appropriate and fair methodology.
One of the strongest means to convince management that their (ab)use of the data in your report is unwise is to remind them of the concept of perverse incentives. It's entirely possible that over time, analysts will modify their behavior in a way that results in higher rankings in performance reports at the cost of real performance or quality of results that are not otherwise captured or expressed. You seem to have a good understanding of your domain - so I would hope that you could provide specific and dramatic examples of such consequences to help make your case.
All you can do is to try and educate the managers as to why what they're doing is incorrect.
Beyond that, you can't stop idiots from being idiotic, and you'll just go mad trying.
I definitely wouldn't "break" code that people are relying on, even if it's not a specific deliverable. That will only cause them to complain about you, a move which may affect your own EPR :-)
I really think the key here is good communication with your managers.
Besides, I like PatrickV's idea. You could also try some other ways to engineer your tool around the problem so that it seems silly, or is hard, to use it as a performance measurement - change the names of the statistics to mean something other than "how good programmer X is", make it hard to get data per person, show error statistics.
You can also try to display the data in another way (this may actually make your managers think you are trying to help them). Show a graph - a few pixels' difference in position is harder to latch onto than a numeric result (my guess: your managers are using Excel and coloring everything below average red). Draw the error margin so it doesn't make sense to obsess over fractions of a percentage point.
Give the result as a scale - a low and a high bound that take your error information into account; that is harder to compare.
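A sketch of that kind of display, assuming matplotlib and made-up numbers: plot each analyst's score as a point with its error margin, plus the group average, so a fraction-of-a-percent difference visibly drowns in the uncertainty.

    import numpy as np
    import matplotlib.pyplot as plt

    analysts = ["A", "B", "C", "D"]
    accuracy = np.array([91.2, 90.5, 92.1, 89.8])   # percent (illustrative)
    margin = np.array([2.5, 2.5, 3.0, 2.5])         # +/- margin of error, percent

    positions = np.arange(len(analysts))
    fig, ax = plt.subplots()
    ax.errorbar(positions, accuracy, yerr=margin, fmt="o", capsize=5)
    ax.axhline(accuracy.mean(), linestyle="--", label="group average")
    ax.set_xticks(positions)
    ax.set_xticklabels(analysts)
    ax.set_ylabel("Accuracy (%)")
    ax.legend()
    plt.show()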
Edit: Oh yeah, and read about "social interfaces". You can start with Spolsky's Not Just Usability and Building Communities with Software.
I would echo #paxdiablo's advice, as a first step:
Work on the report on the inherent errors. In fact, make it the introduction to every copy generated.
When you refer to the measurement errors, indicate they are the lower limit of the errors (unless there actually aren't any).
Try to educate the manager(s) in the error of his/her ways.
If possible, discuss the issue with your manager, and perhaps with the offending managers' management; depending on how familiar you are with them, you would probably limit it to expressing some concerns and giving a heads-up.
Consult your HR department, or whoever is in charge of fairness in the performance reviews.
Good luck.
The problem is that the code is not yours, it belongs to your company. They really can do whatever they want with it.
I hate to say this, but if you have an issue with the ethics of your company you will have to leave that company.
One thing you could do is implement the comparison yourself. If he really wants to check whether somebody is performing significantly worse than the rest, it should be tested formally as well.
Now, choosing the right test is a bit tricky without knowing the data and its structure, so I can't really advise you on that one. Just take into account that if you do pairwise comparisons, or compare multiple scores against an average, you run into the multiple-testing problem. A classic way of correcting for this is the Bonferroni correction. If you implement that one, you can be fairly sure that at a certain point no one will jump out any more; the Bonferroni correction is very conservative. Another option is the Dunn-Sidak correction, which is supposed to be less conservative.
The correct implementation would be an ANOVA - if the assumptions are met and the data are suitable, of course - with a post-hoc comparison like a Tukey Honest Significant Difference test. That way at least the uncertainty in the results is taken into account.
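A sketch of what that formal comparison might look like with SciPy (the analyst names and scores are made up; the point is the overall ANOVA plus corrected pairwise tests, rather than ranking raw scores against the average):

    from itertools import combinations
    import numpy as np
    from scipy import stats

    scores = {
        "analyst_a": np.array([0.92, 0.88, 0.95, 0.91, 0.90]),
        "analyst_b": np.array([0.89, 0.93, 0.90, 0.92, 0.88]),
        "analyst_c": np.array([0.85, 0.87, 0.84, 0.90, 0.86]),
    }

    # One-way ANOVA: is there evidence the analysts differ at all?
    f_stat, p_overall = stats.f_oneway(*scores.values())
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_overall:.3f}")

    # Pairwise t-tests with a Bonferroni correction for multiple testing.
    pairs = list(combinations(scores, 2))
    for a, b in pairs:
        t_stat, p = stats.ttest_ind(scores[a], scores[b])
        p_corrected = min(p * len(pairs), 1.0)   # Bonferroni: scale by number of comparisons
        verdict = "significant" if p_corrected < 0.05 else "not significant"
        print(f"{a} vs {b}: corrected p = {p_corrected:.3f} ({verdict})")

A Tukey HSD post-hoc test (available in statsmodels as pairwise_tukeyhsd) would be the less conservative replacement for the Bonferroni-corrected pairwise tests.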
If you don't have a clue on which test to use, describe your data in detail on stats.stackexchange.com and ask for help on which test to use.
Cheers
I just wanted to elaborate on the perverse-incentives answer from LBushkin. I can easily see your problem extending to where analysts avoid difficult topics for fear of reducing their score, or provide the same answer as the earlier stages to avoid hurting a friend's score, even if that answer is not correct. An interesting question is what happens if the later answer is incorrect - you have no ground truth, just successive analytic opinions - in that case I assume the first answer is marked as "incorrect", right?
Maybe presenting some of these extensions to the manager will help.

Visualising piano performance evaluation

I need to develop a performance evaluator for piano playing. Based on a MIDI file generated from sheet music, I need to evaluate a MIDI recording of the actual playing (from a MIDI keyboard). I'm planning to evaluate the playing based on note pitch, duration and loudness. The evaluation is, I suppose, a comparison of the notes of the sheet music and of the playing, both in MIDI.
But I have no idea how to visualise this evaluation process (i.e. show where the person has gone wrong) - maybe show the notation and highlight which notes were played wrongly.
But how can I show any of this in graphical form, or more precisely on a stave (a music score) itself? I have note details (pitch, duration) and score details (key and time signature) stored in a table, and I'm using Java. But I have no clue how to put all this into graphical form.
Any insight is most gratefully appreciated. Thanks in advance.
What you're talking about, really, is a graphical diff tool for musical notation. I think the easiest way to show differences is with an overlay of played notes (and rests) over the "correct" score symbols. Where it gets tricky is in showing volume differences, and whether notes are played (or should be played) staccato, marcato, tenuto, etc. For example, a note with a dot over it is meant to be played staccato, but your MIDI representation of that quarter note might be interpreted as an eighth note followed by an eighth rest, and so on.
You'll also have to quantize the results of the live play, which means you will have to allow some leeway for the human being to be slightly before or after the beat without notating differently. If you don't do this, the only "correct" interpretation of the notes will be very mechanical (and not pleasing to the ear).
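A rough sketch of the note-comparison step with a quantization tolerance, written in Python for brevity although the question uses Java; the tick resolution, grid size, and note tuples are illustrative assumptions, not anything prescribed by MIDI:

    TICKS_PER_BEAT = 480             # common MIDI resolution
    GRID = TICKS_PER_BEAT // 4       # snap onsets to sixteenth notes
    TOLERANCE = TICKS_PER_BEAT // 8  # timing slack still counted as correct

    def quantize(ticks):
        """Snap an onset time to the nearest grid line."""
        return round(ticks / GRID) * GRID

    def diff_notes(score_notes, played_notes):
        """Each note is (onset_ticks, pitch, duration_ticks); returns per-note verdicts."""
        results = []
        played = {(quantize(t), p): (t, d) for t, p, d in played_notes}
        for onset, pitch, duration in score_notes:
            match = played.get((onset, pitch))
            if match is None:
                results.append((onset, pitch, "missed or wrong pitch"))
                continue
            played_onset, played_duration = match
            if abs(played_onset - onset) > TOLERANCE:
                results.append((onset, pitch, "badly timed"))
            elif abs(played_duration - duration) > GRID:
                results.append((onset, pitch, "wrong duration"))
            else:
                results.append((onset, pitch, "ok"))
        return results

    score = [(0, 60, 480), (480, 62, 480), (960, 64, 480)]    # C, D, E quarter notes
    played = [(5, 60, 470), (490, 62, 230), (955, 65, 480)]   # D cut short, E played as F
    for onset, pitch, verdict in diff_notes(score, played):
        print(onset, pitch, verdict)

The verdict list maps naturally onto the overlay idea above: each (onset, pitch) pair tells you where on the stave to draw, and the verdict tells you how to colour or annotate it.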
As for drawing the notation and placing it on the correct lines or spaces on the staves, that is not hard if you understand how to draw graphics. There are musical fonts available that let you use alphanumeric characters to represent note bodies, stems, rests, etc. You will also have to understand key signatures, accidentals, when certain notes are enharmonic, and so on.
This is not a small undertaking you are proposing, and there is already a lot of software out there that does a lot of what you are trying to do. Maybe some exists that does exactly what you want to do, so do research it before you start coding. :) Look at various work that has already been done and see if there is anything you can use or which would put you off your project.
I made my own keyboard player/recorder for QuickTime's MIDI implementation a few years ago and had to solve a number of the problems you face. I did it for fun, and it was fun (and educational for me), but it could never compete with commercial software in the genre. And although people did enjoy it, I really did not have time to maintain it and add the features people wanted, so eventually I had to abandon it. This kind of thing is really a lot of work.

3D modeling for data structures

I'm looking for 3D modeling/animation software. Honestly, I don't know if this is something achievable - but what I want to have is some kind of visual representation of various ideas.
Speaking in the future tense: if I were to read about the boot process of an OS, I would visualize the various data structures building up, and I could step through the process with a sliding bar or so. If I were to think about a complex data structure, I would have a 3D representation of the various links and relations between its parts. Another would be a Git repository at work - how commits/trees/blobs are linked in space, and how they progress as time passes. And all of these would be interactive.
The reason I want to do this is that it'd be very easy to explain the process - not just to others, but also to myself. I could revisit my model, and it'd be a quick brush-up.
I'm sure there is no ready-to-use software for this. What I could think of are Flash with ActionScript, Blender 3D (Python scripting?), or Synfig. Whatever it is, I'll have to learn it from scratch, and I'm looking for suggestions as to which (even if not on my list) is the right one to choose.
Thanks
I've used Blender, but it requires a large upfront investment of time, especially to learn the UI. Blender is all about the hotkeys. Once you have them memorized, it's great. But getting there takes a while.
Alice might be worth a look. It looks easy to use and supports scripting.
There are many tools available for 3D modeling. I'm a fan of 3D Studio Max, but there are also Blender, Maya, and trueSpace.
You may want to take a look at the field of visualization to help with illustrating your message.
I suspect that packages such as 3D Studio Max and Blender are too powerful, in the sense that your relatively simple requirements will force you onto too long a learning path. Try Googling for "data structure animations" to get an idea of what others have used. Also, head over to Information Aesthetics; they recently featured a tool for visualising commits and checkouts to/from repositories, and similar things.
Lego Designer is nearly my favourite: very good for 3D block animations, but so far I haven't figured out how to add text to the blocks.
