Program for the universal enveloping algebra of su(3) in Mathematica

From information on the internet, in particular this link: http://forums.wolfram.com/mathgroup/archive/1999/Nov/msg00213.html
there was an interesting discussion of "How to do Lie algebra in Mathematica".
In our small seminar project we have been studying the Poincaré–Birkhoff–Witt theorem and applying it to computations in the universal enveloping algebra of su(3). Our teacher asked us to write a program in Mathematica to compute the coefficients and monomials in the product of two monomials of su(3). Example:
Product of x^15*y^10*z^7 and x^4*y^5*z^6
Remark: x, y, z are generators of the universal enveloping algebra of su(3), i.e. the Lie brackets are
[x,z] = 2x, [y,z] = -2y and [x,y] = z.
I know this is a very important problem for quantum mechanics, but right now we are stuck and have no idea how to solve it. I feel that Mathematica's NonCommutativeMultiply function is very rudimentary for this.
Hope you can suggest something or help us.
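Mathematica aside, the PBW rewriting itself is mechanical enough to prototype in a few lines. Below is a sketch in Python (the choice of ordering x < y < z and all function names are my own, not from any library) that repeatedly applies the stated brackets as rewrite rules until every word is in the PBW normal form x^a y^b z^c:

```python
from collections import defaultdict

# Rewrite rules derived from the stated brackets:
#   [x,y] = z   ->  y*x = x*y - z
#   [x,z] = 2x  ->  z*x = x*z - 2x
#   [y,z] = -2y ->  z*y = y*z + 2y
# Each out-of-order adjacent pair maps to a list of (replacement word, coefficient).
SWAP = {
    ('y', 'x'): [(('x', 'y'), 1), (('z',), -1)],
    ('z', 'x'): [(('x', 'z'), 1), (('x',), -2)],
    ('z', 'y'): [(('y', 'z'), 1), (('y',), 2)],
}

def normal_order(word, coeff=1):
    """Reduce one word (a tuple of generators) to a dict {normal word: coefficient}."""
    result = defaultdict(int)
    stack = [(tuple(word), coeff)]
    while stack:
        w, c = stack.pop()
        for i in range(len(w) - 1):
            pair = (w[i], w[i + 1])
            if pair in SWAP:
                # Replace the bad pair, keep prefix and suffix, and recurse via the stack.
                for repl, k in SWAP[pair]:
                    stack.append((w[:i] + repl + w[i + 2:], c * k))
                break
        else:  # no out-of-order pair: the word is already in PBW form
            result[w] += c
    return dict(result)

def monomial(a, b, c):
    """The PBW basis monomial x^a y^b z^c as a word."""
    return ('x',) * a + ('y',) * b + ('z',) * c

def multiply(m1, m2):
    """Product of two monomials, expanded in the PBW basis."""
    return normal_order(m1 + m2)
```

For example, `multiply(monomial(0, 0, 1), monomial(1, 0, 0))` expands z*x as x*z - 2x. Be warned that this naive term-by-term rewriting blows up combinatorially for exponents as large as 15; for the stated example you would want to collect terms at every step and memoize intermediate words, but the rewrite rules stay the same.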

Related

Algorithmic solution to Minesweeper

I am trying to write a Minesweeper solver. As you know, there are two ways to determine which fields in the minefield are safe to open, or which fields are mined and need to be flagged. The first kind of deduction is trivial, and looks something like this:
if (number of mines around X - number of currently discovered mines around X) == number of unopened fields around X then
    all unopened fields around X are mined
if (number of mines around X == number of currently discovered mines around X) then
    all unopened fields around X are NOT mined
But my question is: what about the situation where we can't find any mined or safe field this way and need to reason about more than one field at a time?
http://img541.imageshack.us/img541/4339/10299095.png
Consider this situation, for example: we can't determine anything using the previous method, so I need help with an algorithm for these cases.
I have to use the A* algorithm for this, which is why I need all possible safe states for the next step. When I find all possible safe states, I will add them to the current shortest path, sort the list of paths by the heuristic function, and choose the next field to open.
Awesome problem, before you get too excited though, please read NP Completeness and Minesweeper, as well as the accompanying presentation which develops some good worst case examples and how a human might solve them. Nevertheless, in expectation we most likely won't hit a time barrier, if we use basic pruning and heuristics.
The question of generating the game is asked here: Minesweeper solving algorithm. There is a very cool post on algebraic methods. You can also give backtracking a try (i.e. take a guess and see if that invalidates things), similar to the case where local information is not enough for something like sudoku. See this great discussion about this technique.
As #tigger said, this is not a problem that can be solved with a simple set of rules. Minesweeper is a good example where backtracking algorithms such as DPLL are useful. With something as simple as propositional logic, you can implement a very efficient solver for Minesweeper. I am not sure if you are familiar with AI reasoning and logic inference; if not, you might want to have a look at the book "Artificial Intelligence: A Modern Approach" by Stuart Russell and Peter Norvig. For a quick reference on DPLL and propositional logic, search "wumpus world propositional logic" on Google.
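To make the backtracking idea concrete, here is a small sketch (all names are mine, not from any library): model each revealed number as a constraint "exactly k mines among this set of unknown cells", enumerate mine assignments over the frontier cells with backtracking and early pruning, and call a cell provably safe if it is mine-free in every consistent assignment:

```python
def solve(constraints, cells):
    """constraints: list of (cell_set, mine_count); cells: ordered unknown cells.
    Returns every consistent {cell: 0/1} assignment, found by backtracking."""
    solutions = []

    def consistent(assign):
        for cs, k in constraints:
            assigned = [assign[c] for c in cs if c in assign]
            mines = sum(assigned)
            if mines > k:                              # already too many mines
                return False
            if mines + (len(cs) - len(assigned)) < k:  # can no longer reach k
                return False
        return True

    def backtrack(i, assign):
        if i == len(cells):
            solutions.append(dict(assign))
            return
        for v in (0, 1):
            assign[cells[i]] = v
            if consistent(assign):
                backtrack(i + 1, assign)
            del assign[cells[i]]

    backtrack(0, {})
    return solutions

def provably_safe(constraints, cells):
    """Cells that carry no mine in any consistent assignment."""
    sols = solve(constraints, cells)
    return [c for c in cells if all(s[c] == 0 for s in sols)]
```

This is exponential in the frontier size in the worst case (as the NP-completeness result predicts), but the pruning in `consistent` keeps typical positions fast, and the set of consistent assignments is exactly what an A* successor function needs.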

Course Scheduling Algorithms: why use of DFS or Graph coloring is not suggested?

I need to develop course timetabling software that can allot timeslots and rooms efficiently. This is a curriculum-based routine, not post-enrollment based. "Efficiently" means classes are assigned timeslots according to staff time preferences, and we also need to minimize overlap between 1st-year and 2nd-year classes so that 2nd-year students can retake the courses they failed (likewise for the 3rd/4th-year pair).
At first I thought this would be an easy problem, but it now seems otherwise. Most of the papers I've looked at use Genetic Algorithms, PSO, Simulated Annealing, or similar methods, and I'm still unable to translate the problem into a GA formulation.
What I'm confused about is why almost none of them suggests DFS or a graph-coloring algorithm.
Can someone explain what happens if DFS/graph coloring is used, or why they aren't suggested or tried?
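For what it's worth, the hard-constraint core of the problem really is graph coloring: courses are vertices, conflicts (shared students or a shared teacher) are edges, and timeslots are colors. A DFS/backtracking coloring like the sketch below (all names are mine) finds a clash-free timetable, but it has no notion of weighted preferences, which is why the literature reaches for GA/SA/ILP once soft objectives enter:

```python
def color_conflict_graph(courses, conflicts, slots):
    """Backtracking (DFS) graph coloring: assign each course a timeslot so
    that conflicting courses get different slots. Handles hard constraints
    only; soft preferences need an objective on top (ILP, GA, SA, ...).
    conflicts: dict mapping a course to the courses it clashes with."""
    assignment = {}

    def dfs(i):
        if i == len(courses):
            return True
        c = courses[i]
        for s in slots:
            if all(assignment.get(other) != s for other in conflicts.get(c, ())):
                assignment[c] = s
                if dfs(i + 1):
                    return True
                del assignment[c]    # backtrack
        return False

    return assignment if dfs(0) else None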
My experience with solving this problem for a complex department is that the hard constraints (such as no overlap between courses taken by the same population, and the teachers' hard constraints) are rather easily solvable by exact methods. I modeled the problem as a 0-1 integer linear program and solved it with a SAT-based tool called minisat+. Competitive commercial tools like CPLEX can also solve it.
So with today's tools there is no need to approximate as suggested above, even when the input is rather large.
Now, optimizing the solution is a different story. There can be many (weighted) objectives, and finding the solution that brings the objective to its minimum is indeed computationally very hard (no tool I tried could solve it within 24 hours), but the tools reach a near optimum in a few hours (I know it is near optimum because I can compute a theoretical bound on the solution).
This document describes applying a GA approach to university time-tabling, so it should be directly applicable to your requirement: Using a GA to solve university time-tabling

How do people prove the correctness of Computer Vision methods?

I'd like to pose a few abstract questions about computer vision research. I haven't quite been able to answer these questions by searching the web and reading papers.
How does someone know whether a computer vision algorithm is correct?
How do we define "correct" in the context of computer vision?
Do formal proofs play a role in understanding the correctness of computer vision algorithms?
A bit of background: I'm about to start my PhD in Computer Science. I enjoy designing fast parallel algorithms and proving the correctness of these algorithms. I've also used OpenCV for some class projects, though I don't have much formal training in computer vision.
I've been approached by a potential thesis advisor who works on designing faster and more scalable algorithms for computer vision (e.g. fast image segmentation). I'm trying to understand the common practices in solving computer vision problems.
You just don't prove them.
Instead of a formal proof, which is often impossible, you can test your algorithm on a set of test cases and compare the output with previously known algorithms or known-correct answers (for example, when recognizing text, you can generate a set of images where you know what the text says).
In practice, computer vision is more like an empirical science: You gather data, think of simple hypotheses that could explain some aspect of your data, then test those hypotheses. You usually don't have a clear definition of "correct" for high-level CV tasks like face recognition, so you can't prove correctness.
Low-level algorithms are a different matter, though: you usually have a clear, mathematical definition of "correct" there. For example, if you invented an algorithm that can calculate a median filter or a morphological operation more efficiently than known algorithms, or that can be parallelized better, you would of course have to prove its correctness, just like any other algorithm.
It's also common to have certain requirements for a computer vision algorithm that can be formalized: for example, you might want your algorithm to be invariant to rotation and translation; these are properties that can be proven formally. It's also sometimes possible to create mathematical models of signal and noise, and to design a filter that has the best possible signal-to-noise ratio (IIRC the Wiener filter and the Canny edge detector were designed that way).
Many image processing/computer vision algorithms have some kind of "repeat until convergence" loop (e.g. snakes or Navier-Stokes inpainting and other PDE-based methods). You would at least try to prove that the algorithm converges for any input.
This is my personal opinion, so take it for what it's worth.
You can't prove the correctness of most computer vision methods right now. I consider most current methods a kind of "recipe" where ingredients are thrown in until the "result" is good enough. Can you prove that a brownie is correct?
It is a bit similar to how machine learning evolved. At first, people built neural networks, but they were just a big "soup" that happened to work more or less. They worked sometimes and didn't in other cases, and no one really knew why. Then statistical learning (through Vapnik, among others) kicked in, with real mathematical backing. You could prove that you had the unique hyperplane that minimized a particular loss function; PCA gives you the closest matrix of fixed rank to a given matrix (under the Frobenius norm, I believe), etc.
Now, there are still a few things that are "correct" in computer vision, but they are pretty limited. What comes to mind are wavelets: they form the sparsest representation in an orthogonal basis of functions (i.e., the most compressed way to represent an approximation of an image with minimal error).
Computer vision algorithms are not like theorems that you can prove; they usually try to interpret image data in terms that are more understandable to us humans, like face recognition, motion detection, video surveillance, etc. Exact correctness is not computable, unlike image compression, where you can easily judge the result by the size of the images.
The most common way to report results of computer vision methods (especially for classification problems) is with graphs of precision vs. recall or accuracy vs. false positives, measured on standard databases available on various sites. Usually, the harsher you set the parameters for correct detection, the more false positives you generate. The typical practice is to choose the point on the graph according to how many false positives are tolerable for the application.
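As a concrete example of such an evaluation, precision and recall for a detection task reduce to simple set arithmetic. A minimal sketch (the set-of-ids representation and the function name are my own):

```python
def precision_recall(predictions, ground_truth):
    """Precision and recall for a binary detection task.
    predictions, ground_truth: sets of detected / true item ids."""
    tp = len(predictions & ground_truth)   # true positives
    fp = len(predictions - ground_truth)   # false positives
    fn = len(ground_truth - predictions)   # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Sweeping a detector's threshold and plotting the resulting (recall, precision) pairs gives exactly the curve described above.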

Nesting maximum amount of shapes on a surface

In industry, there is often a problem where you need to calculate the most efficient use of material, be it fabric, wood, metal, etc. The starting point is X shapes of given dimensions, made of polygons and/or curved lines, and the target is another polygon of given dimensions.
I assume many current CAM suites implement this, but having no experience using them or knowledge of their internals: what kind of computational algorithm is used to find the most efficient use of space? Can someone point me to a book or other reference that discusses this topic?
After Andrew pointed me in the right direction and named the problem for me in his answer, I decided to dump my research results here in a separate answer.
This is indeed a packing problem and, to be more precise, a nesting problem. The problem is NP-hard, so the algorithms currently in use are heuristic approaches. There do not seem to be any solutions that solve the problem in linear time, except for trivial problem sets. Solving complex problems takes minutes to hours on current hardware if you want good material utilization. There are tens of commercial software packages that offer nesting of shapes, but I was not able to locate any open source solutions, so there are no real examples where one could see the algorithms actually implemented.
An excellent description of the nesting and strip-nesting problems, with historical solutions, can be found in a paper by Benny Kjær Nielsen of the University of Copenhagen (Nielsen).
The general approach seems to be to mix multiple known algorithms in order to find the best nesting solution. These algorithms include (guided/iterated) local search, fast neighborhood search based on the no-fit polygon, and jostling heuristics. I found a great paper on this subject with pictures of how the algorithms work; it also has benchmarks of the different software implementations so far. It was presented at the International Symposium on Scheduling 2006 by S. Umetani et al. (Umetani).
A relatively new and possibly the best approach to date is based on a hybrid genetic algorithm (HGA), a hybrid of simulated annealing and a genetic algorithm, described by Wu Qingming et al. of Wuhan University (Qingming). They implemented it using Visual Studio, an SQL database, and the genetic algorithm optimization toolbox (GAOT) in MATLAB.
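As a toy illustration of the heuristic flavor of these methods, here is a first-fit "shelf" heuristic for the rectangle-only special case (real nesting software works with arbitrary polygons via no-fit polygons and local search; this sketch and its names are entirely my own):

```python
def shelf_pack(rects, bin_width):
    """Pack (width, height) rectangles into a strip of fixed width using a
    first-fit shelf heuristic: sort tallest first, place each rectangle on
    the first shelf where it fits, open a new shelf otherwise.
    Returns a list of (x, y, w, h) placements."""
    shelves = []        # each shelf: (y, shelf_height, used_width)
    placements = []
    y_cursor = 0        # y where the next new shelf starts
    for w, h in sorted(rects, key=lambda r: -r[1]):
        for i, (y, sh, used) in enumerate(shelves):
            if h <= sh and used + w <= bin_width:
                placements.append((used, y, w, h))
                shelves[i] = (y, sh, used + w)
                break
        else:  # no existing shelf fits: open a new one
            placements.append((0, y_cursor, w, h))
            shelves.append((y_cursor, h, w))
            y_cursor += h
    return placements
```

The total strip height used is the quality measure; a local-search or genetic wrapper would perturb the input order and keep the best result, which is the basic shape of the hybrid methods in the papers above.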
You are referring to the well-known computer science domain of packing, for which a variety of problems have been defined and researched, in both 2-dimensional and 3-dimensional space.
There is considerable material available on the net for the defined problems, but to find it you kind of have to know the name of the problem to search for.
Some packages may well adopt a heuristic approach (which I suspect they do), and some might go to the lengths of calculating all the possibilities to get the absolute right answer.
http://en.wikipedia.org/wiki/Packing_problem

Travelling Salesman Problem Constraint Representation

I read a couple of articles and some sample code about how to solve the TSP with genetic algorithms, ant colony optimization, etc. But nothing I found included time-window constraints, e.g. "I have to be at customer x before 12 am", and everything assumed symmetry.
Can somebody point me in the direction of some sample code or articles that explain how I can add such constraints to the TSP and how I can represent them in code?
Thanks!
Professor Reinelt at the University of Heidelberg in Germany is one of the leading experts on the TSP. He has a collection of papers on the various variants of the TSP.
see http://www.iwr.uni-heidelberg.de/groups/comopt/software/TSPLIB95/
I think your variant is called Vehicle Routing Problem with Time Windows. ( http://en.wikipedia.org/wiki/Vehicle_routing_problem )
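Once time windows enter, the main change to any GA/ACO representation is that each candidate route needs a feasibility (or penalty) check. A minimal sketch (all names are mine; waiting at a customer until its window opens is assumed to be allowed):

```python
def route_feasible(route, travel, windows, service=0):
    """Check time-window feasibility of one route, VRPTW-style.
    route: ordered list of stops; travel[(a, b)]: travel time from a to b;
    windows[c] = (earliest, latest) arrival times; service: time per stop."""
    prev = route[0]
    earliest, latest = windows[prev]
    t = max(0, earliest)           # start no earlier than the first window opens
    if t > latest:
        return False
    for c in route[1:]:
        t += service + travel[(prev, c)]
        earliest, latest = windows[c]
        t = max(t, earliest)       # wait if we arrive before the window opens
        if t > latest:             # arrived after the window closed
            return False
        prev = c
    return True
```

In a GA you would either discard infeasible offspring or, more commonly, add a penalty proportional to the total window violation to the fitness so the search can cross infeasible regions. Asymmetry needs no special treatment: the `travel` table simply stores different values for (a, b) and (b, a).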
You should take a look at what the state of the art in domain-independent planning can do for you: http://ipc.informatik.uni-freiburg.de/