Sparse Matrix Editing

I'm new here, so I'm not sure whether this has been asked before, but I did search for it.
I'm wondering whether anyone has encountered a similar problem. I have a sparse matrix that is LU-decomposed, and the L and U factors are then inverted. The problem I run into is the following: the original sparse matrix needs to be edited because of the input data, and in some cases (I know why) it becomes singular. The fix is simple: I remove the row and column of the elements that made it singular and continue with my code. But is there a way to edit the (inverted) LU factors accordingly, or do I have to create new ones every time? Recomputing them consumes a lot of time, since the number of nonzero elements is around 10K or more.
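In case it helps, here is a minimal sketch of the "rebuild and re-factorize" path using Eigen's SparseLU (Eigen is my assumption, since the question doesn't name a library, and removeRowCol / solveReduced are made-up helper names). One side note: with sparse factors it is usually better to keep L and U and call solve() than to invert them explicitly, since the inverses are generally much denser.

```cpp
#include <Eigen/Dense>
#include <Eigen/Sparse>
#include <vector>

// Hypothetical helper: copy A with row and column k removed, by re-inserting
// the surviving nonzeros with shifted indices.
Eigen::SparseMatrix<double> removeRowCol(const Eigen::SparseMatrix<double>& A, int k) {
    std::vector<Eigen::Triplet<double>> kept;
    kept.reserve(A.nonZeros());
    for (int j = 0; j < A.outerSize(); ++j)
        for (Eigen::SparseMatrix<double>::InnerIterator it(A, j); it; ++it) {
            if (it.row() == k || it.col() == k) continue;
            kept.emplace_back(it.row() - (it.row() > k ? 1 : 0),
                              it.col() - (it.col() > k ? 1 : 0),
                              it.value());
        }
    Eigen::SparseMatrix<double> B(A.rows() - 1, A.cols() - 1);
    B.setFromTriplets(kept.begin(), kept.end());
    return B;
}

// Re-factorize the reduced matrix and solve with it. The factors are used
// through solve() instead of being inverted explicitly, which keeps them sparse.
Eigen::VectorXd solveReduced(const Eigen::SparseMatrix<double>& A, int k,
                             const Eigen::VectorXd& bReduced) {
    Eigen::SparseMatrix<double> B = removeRowCol(A, k);
    B.makeCompressed();
    Eigen::SparseLU<Eigen::SparseMatrix<double>> lu;
    lu.compute(B);                 // symbolic + numeric factorization
    // In real code, check lu.info() == Eigen::Success here.
    return lu.solve(bReduced);     // two sparse triangular solves
}
```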

How can I determine whether two simple graphs are isomorphic using matrices?

I think you have to swap columns and then do the same for the rows, but it didn't work, so does anyone have a solution?
Note: one of the matrices is static and the other variable.
It is unknown whether this problem is in the class P. Thus it is extremely unlikely you will find a reasonable-time solution for it. If you do, you should publish it.
Given that, you can basically just go over all permutations of one of the matrices, applying the same permutation to its rows and columns, and check whether any of the results equals the second matrix.
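If it helps, a brute-force check along those lines might look like the sketch below (C++ with Eigen dense matrices, purely illustrative). The important detail is that the same permutation is applied to the rows and the columns; permuting them independently tests a different relation. Runtime is O(n! · n²), so it is only usable for very small graphs.

```cpp
#include <Eigen/Dense>
#include <algorithm>
#include <numeric>
#include <vector>

// Brute force: is there a permutation P with P * A * P^T == B?
// The SAME permutation must be applied to rows and columns, because
// relabelling a graph's vertices permutes both at once.
bool isomorphicByBruteForce(const Eigen::MatrixXi& A, const Eigen::MatrixXi& B) {
    if (A.rows() != B.rows() || A.cols() != B.cols()) return false;
    const int n = static_cast<int>(A.rows());
    std::vector<int> perm(n);
    std::iota(perm.begin(), perm.end(), 0);
    do {
        bool match = true;
        for (int i = 0; i < n && match; ++i)
            for (int j = 0; j < n && match; ++j)
                if (A(perm[i], perm[j]) != B(i, j)) match = false;
        if (match) return true;
    } while (std::next_permutation(perm.begin(), perm.end()));
    return false;  // all n! relabellings tried, none matched
}
```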

pseudo diagonalize adjacency matrix

Given an adjacency matrix that is rather sparse, meaning there are a lot of zero entries, I would like to do the following: change the order of the rows and columns of the matrix so that the non-zero entries end up as close as possible to the diagonal. Then I would get some kind of pseudo-diagonal matrix.
I would like to know whether there are known algorithms for doing exactly that. Ideally, I think there must also be a metric for how "diagonal" the result is.
The reason for doing this is that afterwards I would be able to store the matrix in a much smaller data structure, which would be faster to store and load.
My own research has shown me that I might not know the correct terminology for the problem, so I would be happy to learn the correct wording for it, and of course to learn about algorithms that can do this "pseudo-diagonalisation" by reordering the rows and columns of a matrix.
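What you are describing is usually called bandwidth (or envelope/profile) reduction, and the classic row/column reordering algorithm for it is (Reverse) Cuthill-McKee. Below is a rough C++ sketch; representing the sparsity pattern as plain adjacency lists, and the function names, are my own choices for illustration.

```cpp
#include <algorithm>
#include <cstdlib>
#include <queue>
#include <vector>

// Bandwidth of a symmetric sparsity pattern: the "how diagonal" metric.
// adj[i] lists the column indices of the nonzeros in row i.
int bandwidth(const std::vector<std::vector<int>>& adj) {
    int bw = 0;
    for (int i = 0; i < static_cast<int>(adj.size()); ++i)
        for (int j : adj[i]) bw = std::max(bw, std::abs(i - j));
    return bw;
}

// Rough sketch of (reverse) Cuthill-McKee: breadth-first search, visiting
// neighbours in order of increasing degree, then reversing the ordering.
// order[k] is the old index of the vertex placed at position k.
std::vector<int> reverseCuthillMcKee(const std::vector<std::vector<int>>& adj) {
    const int n = static_cast<int>(adj.size());
    std::vector<int> order;
    std::vector<bool> seen(n, false);
    for (int start = 0; start < n; ++start) {   // handle each connected component
        if (seen[start]) continue;
        // (Picking a low-degree / pseudo-peripheral start vertex instead of
        //  the first unvisited one usually gives a smaller bandwidth.)
        std::queue<int> q;
        q.push(start);
        seen[start] = true;
        while (!q.empty()) {
            int v = q.front(); q.pop();
            order.push_back(v);
            std::vector<int> nbrs;
            for (int w : adj[v])
                if (!seen[w]) { nbrs.push_back(w); seen[w] = true; }
            std::sort(nbrs.begin(), nbrs.end(),
                      [&adj](int a, int b) { return adj[a].size() < adj[b].size(); });
            for (int w : nbrs) q.push(w);
        }
    }
    std::reverse(order.begin(), order.end());  // the "reverse" in RCM
    return order;
}
```

Applying the returned ordering symmetrically to the rows and columns and re-measuring the bandwidth gives the "how diagonal" metric you mention.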

Placing a matrix in echelon form with the Eigen library

This post concerns very short, wide arrays (# columns can be several orders of magnitude larger than the number of rows).
Due to the disparity in row/column number and the large size of the matrices I work with, it's usually infeasible to hold the U part of an LU decomposition in memory. Does Eigen have functionality to compute just the L? Equivalently, to place the input matrix in echelon form using row operations?
General notes
(1) I saw a related question here
https://forum.kde.org/viewtopic.php?f=74&t=138686&p=371097&hilit=echelon#p371097
The answer suggested looking at the image() method under FullPivLU, but I wasn't able to find the necessary information in the docs. In particular, it's often important to obtain the matrix L, in practice. An arbitrary basis for the column space of the matrix does not suffice.
(2) There was another question here
https://forum.kde.org/viewtopic.php?f=74&t=130430&p=348923&hilit=echelon#p348923
but it did not seem to get a response.
(3) Issues of stability are of less concern in the (fairly specialized) application domain that motivates this question, since we usually work over finite fields.
Thanks!
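For what it's worth, Eigen's FullPivLU exposes both triangular factors through matrixLU(), along with the row and column permutations. Here is a minimal sketch of extracting them for a short, wide matrix, following the pattern in the FullPivLU documentation; note that it does not avoid forming U in memory, so it addresses the extraction question but not the storage constraint.

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
    // Short, wide matrix: few rows, many columns.
    Eigen::MatrixXd M = Eigen::MatrixXd::Random(4, 1000);

    Eigen::FullPivLU<Eigen::MatrixXd> lu(M);

    // matrixLU() packs both factors: L is unit lower-triangular (rows x rows),
    // U is upper-trapezoidal (rows x cols).
    const int r = static_cast<int>(M.rows());
    Eigen::MatrixXd L = Eigen::MatrixXd::Identity(r, r);
    L.triangularView<Eigen::StrictlyLower>() = lu.matrixLU().leftCols(r);
    Eigen::MatrixXd U = lu.matrixLU().triangularView<Eigen::Upper>();

    // FullPivLU factors M as P^{-1} * L * U * Q^{-1}.
    Eigen::MatrixXd back = lu.permutationP().inverse() * L * U
                         * lu.permutationQ().inverse();
    std::cout << (back - M).norm() << "\n";  // ~0 up to round-off
    return 0;
}
```

Since P·M·Q = L·U, the extracted U is an echelon (upper-trapezoidal) form of the row/column-permuted matrix, which is as close as FullPivLU gets to "echelon form via row operations".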

Defining a special case of a subset-sum with complications

I have a problem that I have a number of questions about. First, I'm mostly looking for help describing and understanding the problem at hand. Solutions are always welcome, but most importantly I could use some advice from someone more experienced than I. Now, to the problem at hand:
I have a set of orders that each require some number of items. I also have several groupings of items that each contain some number of some items (call them groups). The goal is to find a subset of the orders that can be fulfilled using as few groups as possible and where the total number of items contained within the orders is between n and N.
Edit: The constraints on the number of items contained in the orders (n and N) are chosen independently.
To me at least, that's a really complicated way of stating the problem, so I've been trying to re-phrase it as a knapsack problem (I suspect this might reduce to a subset-sum). To help my conceptual understanding of this I've started using the following definitions:
First, let's say that a dimension exists for each possible item, and something's "length" in that dimension is the number of that particular type of item it either has or requires.
From this, an order becomes an 'n-dimensional object' where its value in each dimension corresponds to the number of that item that it requires.
In addition, a group can be seen as an 'n-dimensional box' that has space in each dimension corresponding to the number of items it provides.
An object's value is equal to the sum of its lengths in all dimensions.
Boxes can be combined.
Given the above I've rephrased the problem to this:
What is the smallest combination of boxes that can hold a combination of objects with total value between n and N?
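To make that framing concrete, here is a minimal C++ sketch of the model (the Counts type and function names are my own, purely illustrative): each order and each group is a vector of per-item counts, an object's value is the sum of its counts, boxes combine by adding counts, and a set of orders fits if the boxes cover it in every dimension.

```cpp
#include <cstddef>
#include <numeric>
#include <vector>

using Counts = std::vector<long long>;  // one entry per item type (dimension)

// Value of an object or box: total number of items across all dimensions.
long long value(const Counts& c) {
    return std::accumulate(c.begin(), c.end(), 0LL);
}

// Combine boxes (or orders) by adding their per-item counts.
Counts combine(const Counts& x, const Counts& y) {
    Counts out(x.size());
    for (std::size_t i = 0; i < x.size(); ++i) out[i] = x[i] + y[i];
    return out;
}

// A set of orders fits into a set of boxes iff, in every dimension,
// the boxes provide at least as many items as the orders require.
bool fits(const Counts& ordersTotal, const Counts& boxesTotal) {
    for (std::size_t i = 0; i < ordersTotal.size(); ++i)
        if (ordersTotal[i] > boxesTotal[i]) return false;
    return true;
}
```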
Question #1: Is this a correct/useful way to express the problem? Does it seem like I've missed anything obvious?
As I see it, since there are two combinations that I'm looking for I need to break the problem into two parts. So far I think breaking the problem up like this is a good step:
(1) How many objects can box (or combination of boxes) X hold?
(2) Check all (or preferably some small subset of) the possible combinations of boxes and pick the "best".
That makes it a little more manageable, but I'm still struggling with the details.
Question #2: Solved To solve the first part, I think it's appropriate to say that the cost of an object is equal to the sum of its lengths in all dimensions, and so is its value. That places me in a subset-sum problem, right? Obviously it's a special case, but does this problem have a name?
Question #3: Solved I've been looking into subset-sum solutions a lot, but I don't understand how to apply them to something like this in multiple dimensions. I assume it's been done before, but I'm unsure where to start my research. Could someone either describe the principles at work or point me in a research direction?
Edit: After looking at everyone's feedback and digging into the terms I think I've found a good algorithm I can implement to solve part 1. Since I will have a very large number of dimensions compared to the number of items it looks like using a 'primal effective capacity heuristic (PECH)' will be a good fit. I'd be interested in hearing someone's thoughts about it if they have experience with such an algorithm.
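For part 1, a greedy sketch in the spirit of an effective-capacity heuristic could look like the following, reusing the Counts, value and fits helpers from the earlier sketch. To be clear, the selection rule here (take the highest-value order that still fits) is a simplification of my own, not a faithful implementation of the published PECH rule.

```cpp
// Greedy sketch: given the remaining capacity of a fixed set of boxes, keep
// adding the highest-value order that still fits until nothing fits.
// Returns the indices of the chosen orders.
std::vector<std::size_t> greedyFill(const std::vector<Counts>& orders, Counts capacity) {
    std::vector<std::size_t> chosen;
    std::vector<bool> used(orders.size(), false);
    while (true) {
        long long bestValue = -1;
        std::size_t best = orders.size();
        for (std::size_t i = 0; i < orders.size(); ++i) {
            if (used[i] || !fits(orders[i], capacity)) continue;
            if (value(orders[i]) > bestValue) { bestValue = value(orders[i]); best = i; }
        }
        if (best == orders.size()) break;  // nothing else fits
        used[best] = true;
        chosen.push_back(best);
        for (std::size_t d = 0; d < capacity.size(); ++d) capacity[d] -= orders[best][d];
    }
    return chosen;
}
```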
Question #4: For the second part, performance is a concern and I doubt it will be realistic to brute-force it. So I intend to treat all combinations of boxes as a really big tree of solutions. The idea is to compute part 1 for all combinations of M-1 boxes, where M is the total number of boxes, somehow determine the "best" few box combinations from that set, and then do the same to their child nodes in the tree. Does this sound like it would help me arrive at something close to optimal? How would I choose the "best" box combinations?
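What this describes is essentially a beam search: expand the tree level by level, but keep only the most promising box combinations at each level. Below is a rough skeleton; the score function is a placeholder assumption, where you would plug in something like "how much value the greedy fill above can still pack with this subset, provided it stays within the n..N target".

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Beam search over "which boxes to keep": start from the full box set,
// generate children by dropping one box at a time, score each child, and
// keep only the beamWidth highest-scoring subsets before descending a level.
// `score` maps a subset of box indices to a double ("bigger is better").
template <typename ScoreFn>
std::vector<int> beamSearchBoxes(int numBoxes, int beamWidth, int minBoxes, ScoreFn score) {
    std::vector<int> full(numBoxes);
    for (int i = 0; i < numBoxes; ++i) full[i] = i;

    std::vector<std::vector<int>> beam = {full};
    std::vector<int> bestSubset = full;
    double bestScore = score(full);

    while (!beam.empty() && static_cast<int>(beam.front().size()) > minBoxes) {
        // Expand: every subset in the beam spawns one child per dropped box.
        std::vector<std::pair<double, std::vector<int>>> children;
        for (const auto& subset : beam)
            for (std::size_t drop = 0; drop < subset.size(); ++drop) {
                std::vector<int> child = subset;
                child.erase(child.begin() + static_cast<std::ptrdiff_t>(drop));
                children.emplace_back(score(child), child);
            }
        // Prune: keep only the beamWidth best-scoring children.
        std::sort(children.begin(), children.end(),
                  [](const std::pair<double, std::vector<int>>& x,
                     const std::pair<double, std::vector<int>>& y) {
                      return x.first > y.first;
                  });
        if (static_cast<int>(children.size()) > beamWidth) children.resize(beamWidth);

        beam.clear();
        for (const auto& c : children) {
            if (c.first > bestScore) { bestScore = c.first; bestSubset = c.second; }
            beam.push_back(c.second);
        }
    }
    return bestSubset;
}
```

A larger beam width gets closer to the brute-force optimum at a proportionally higher cost, so it gives a tunable accuracy/speed trade-off rather than a guarantee of optimality.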
Thanks for reading! Suggestions for edits and clarifications are welcome.

incremental least squares differing with only one row

I have to solve multiple least squares problems sequentially, that is, one by one. Each least squares problem differs from the previous one by only one row. The right-hand side is the same for all. For example, Problem 1: ||Ax-b|| and Problem 2: ||Cy-b||, where A and C differ by only one row; that is, it is equivalent to deleting a row from A and inserting a new row. When solving Problem 2, I also have x. Is there a fast way to solve Problem 2 for y?
You can use the Sherman-Morrison formula.
The key piece of the linear regression solution is computing the inverse of A'A.
If r is the row that was removed from A and s is the new row in C (written as column vectors; r and s are used here to avoid clashing with the right-hand side b), then
C'C = A'A - rr' + ss'
This is a rank-two change, so you can apply the Sherman-Morrison formula twice, once to remove rr' and once to add ss' (or use the Woodbury identity to handle both at once), and obtain (C'C)^{-1} from (A'A)^{-1} cheaply.
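A minimal sketch of that two-step update in C++ with Eigen (Eigen is my assumption; shermanMorrison is a made-up helper name). It updates (C'C)^{-1} from (A'A)^{-1} when row r of A is replaced by s, and checks the result against a direct inverse.

```cpp
#include <Eigen/Dense>
#include <iostream>

// One Sherman-Morrison step: given Minv = M^{-1}, return (M + u*v')^{-1}
//   = Minv - (Minv u)(v' Minv) / (1 + v' Minv u).
Eigen::MatrixXd shermanMorrison(const Eigen::MatrixXd& Minv,
                                const Eigen::VectorXd& u,
                                const Eigen::VectorXd& v) {
    const double denom = 1.0 + v.dot(Minv * u);
    return Minv - (Minv * u) * (v.transpose() * Minv) / denom;
}

int main() {
    const int m = 50, n = 5, k = 7;                    // k: index of the swapped row
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(m, n);
    Eigen::VectorXd r = A.row(k).transpose();          // old row
    Eigen::VectorXd s = Eigen::VectorXd::Random(n);    // new row

    Eigen::MatrixXd C = A;
    C.row(k) = s.transpose();

    // Gram-matrix inverse for A, then two rank-one updates: remove rr', add ss'.
    Eigen::MatrixXd AtAinv = (A.transpose() * A).inverse();
    Eigen::MatrixXd CtCinv = shermanMorrison(AtAinv, -r, r);   // subtract rr'
    CtCinv = shermanMorrison(CtCinv, s, s);                    // add ss'

    // Compare against the direct inverse; the difference should be ~0.
    std::cout << (CtCinv - (C.transpose() * C).inverse()).norm() << "\n";
    return 0;
}
```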
Unfortunately the answer may be NO...
Changing one row of a matrix can produce a completely different spectrum: the eigenvalues and eigenvectors change in both magnitude and orientation. As a result, the gradient from Problem 1 does not carry over to Problem 2. You can try using your x from Problem 1 as an initial guess for y in Problem 2, but it is not guaranteed to reduce the search time of the optimization.
That said, solving a linear system is not that hard with the powerful packages available. You can use an LU or QR decomposition to improve the computational efficiency considerably.
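For reference, a from-scratch solve of one of these least-squares problems is short in Eigen (again, Eigen is my assumption), for example with a column-pivoted Householder QR, which avoids forming A'A explicitly.

```cpp
#include <Eigen/Dense>

// Solve min ||Ax - b|| for an overdetermined A using a rank-revealing QR.
Eigen::VectorXd solveLeastSquares(const Eigen::MatrixXd& A, const Eigen::VectorXd& b) {
    return A.colPivHouseholderQr().solve(b);
}
```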
