I'm trying to get a single element of an adjugate A_adj of a matrix A, both of which need to be symbolic expressions, where the symbols x_i are binary and the matrix A is symmetric and sparse. Python's sympy works great for small problems:
from sympy import zeros, symbols
size = 4
A = zeros(size,size)
x_i = list(symbols(f'x0:{size}'))
for i in range(size-1):
    A[i,i] += 0.5*x_i[i]
    A[i+1,i+1] += 0.5*x_i[i]
    A[i,i+1] = A[i+1,i] = -0.3*(i+1)*x_i[i]
A_adj_0 = A[1:,1:].det()
A_adj_0
This calculates the first element A_adj_0 of the cofactor matrix (which is the corresponding minor) and correctly gives me 0.125x_0x_1x_2 - 0.28x_0x_2^2 - 0.055x_1^2x_2 - 0.28x_1x_2^2, which is the expression I need, but there are two issues:
1. This is completely infeasible for larger matrices (I need this for sizes of ~100).
2. The x_i are binary variables (i.e. either 0 or 1) and there seems to be no way for sympy to simplify expressions of binary variables, i.e. simplifying polynomials via x_i^n = x_i (see the illustration just below).
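To make the missing simplification concrete (a minimal example of my own):
from sympy import symbols, simplify

x0 = symbols('x0')
# For a binary variable we would want x0**2 -> x0, but sympy has no
# "x0 is 0 or 1" assumption, so this never reduces to 0:
print(simplify(x0**2 - x0))  # x0*(x0 - 1), still not 0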
The first issue can be partly addressed by instead solving a linear equation system Ay = b, where b is set to the first basis vector [1, 0, 0, 0], such that y is the first column of the inverse of A. The first entry of y is the first element of the inverse of A:
b = zeros(size,1)
b[0] = 1
y = A.LUsolve(b)
s = {x_i[i]: 1 for i in range(size)}
print(y[0].subs(s) * A.subs(s).det())
print(A_adj_0.subs(s))
The problem here is that the expression for the first element of y is extremely complicated, even after using simplify() and so on. It would be a very simple expression with the simplification of binary variables mentioned in point 2 above. This method is faster, but still infeasible for larger matrices.
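For what it's worth, one mitigation can be sketched (untested at the target size of ~100): bring y[0] to a single ratio with cancel, then collapse powers of the binary symbols, since x**n == x identically for x in {0, 1} and positive integer n:
from sympy import cancel

y0 = cancel(y[0])  # collect into a single numerator/denominator
# Collapse x**n -> x for the binary symbols (assumes only positive
# integer powers of the x_i occur in the expression):
y0 = y0.replace(lambda e: e.is_Pow and e.base in x_i, lambda e: e.base)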
This boils down to my actual question:
Is there an efficient way to compute a single element of the adjugate of a sparse and symmetric symbolic matrix, where the symbols are binary values?
I'm open to using other software as well.
Addendum 1:
It seems simplifying binary expressions in sympy is possible with a simple custom substitution which I wasn't aware of:
A_subs = A_adj_0
for i in range(size):
    A_subs = A_subs.subs(x_i[i]*x_i[i], x_i[i])
A_subs
You should make sure to use Rational rather than floats in sympy, i.e. S(1)/2 or Rational(1, 2) rather than 0.5.
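A quick toy illustration (my own) of why this matters - float coefficients stick around where exact rationals cancel cleanly:
from sympy import symbols, Rational, nsimplify

x = symbols('x')
print(2 * (0.5 * x))             # 1.0*x - the Float coefficient remains
print(2 * (Rational(1, 2) * x))  # x - exact arithmetic cancels
print(nsimplify(0.5 * x))        # x/2 - converts existing floats to Rationals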
There is a new (undocumented and for the moment internal) implementation of matrices in sympy called DomainMatrix. It always produces polynomial results in fully expanded form and is likely to be a lot faster for a problem like this. It still seems to be fairly slow here, though, because it is not sparse internally (yet - that will probably change in the next release) and because it does not take advantage of the simplification from the symbols being binary-valued. It can be made to work over GF(2), but that is not the same as having symbols that are assumed to take values in GF(2).
In case it is helpful though this is how you would use it in sympy 1.7.1:
from sympy import zeros, symbols, Rational
from sympy.polys.domainmatrix import DomainMatrix
size = 10
A = zeros(size,size)
x_i = list(symbols(f'x0:{size}'))
for i in range(size-1):
    A[i,i] += Rational(1, 2)*x_i[i]
    A[i+1,i+1] += Rational(1, 2)*x_i[i]
    A[i,i+1] = A[i+1,i] = -Rational(3, 10)*(i+1)*x_i[i]
# Convert to DomainMatrix:
dM = DomainMatrix.from_list_sympy(size-1, size-1, A[1:, 1:].tolist())
# Compute determinant and convert back to normal sympy expression:
# Could also use dM.det().as_expr() although it might be slower
A_adj_0 = dM.charpoly()[-1].as_expr()
# Reduce powers:
A_adj_0 = A_adj_0.replace(lambda e: e.is_Pow, lambda e: e.args[0])
print(A_adj_0)
I'd like to numerically solve an equation involving a MatrixSymbol. Here's a basic example:
import sympy as sy
v = sy.MatrixSymbol('v', 2, 1)
equation = (v - sy.Matrix([17, 23])).as_explicit()
I'd like something like:
sy.nsolve(equation, v, sy.Matrix([0,0]))
But because nsolve does not accept MatrixSymbols, I've made a kludgy workaround that gives the correct output of Matrix([[17.0], [23.0]]):
vx, vy = sy.symbols('v_x v_y')
sy.nsolve(equation.subs(v, sy.Matrix([vx, vy])), [vx, vy], [0,0])
Essentially, I've converted a MatrixSymbol to a matrix of Symbols to make nsolve happy.
Is there a better way I should be doing this?
Edit: the workaround can be simplified to:
vseq = sy.symbols('a b') #names must be distinct
sy.nsolve(equation.subs(v, sy.Matrix(vseq)), vseq, [0,0])
But there ought to be a cleaner way to convert a MatrixSymbol to a sequence of Symbols, or a way to avoid needing to do so in the first place.
A cleaner way is to create a Matrix from symarray:
v = sy.Matrix(sy.symarray("v", (2,)))
equation = v - sy.Matrix([17, 23])
sy.nsolve(equation, v, [0, 0])
Here, symarray creates a (NumPy) array of symbols [v_0, v_1], which is then turned into a Matrix. One can also use sy.symarray("v", (2, 1)) to get a 2-D array, but since SymPy's Matrix constructor is cool with 1-D inputs, this is not necessary.
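The same trick generalizes to any m-by-n MatrixSymbol replacement (a sketch; the helper name is my own):
import sympy as sy

def explicit_matrix(name, m, n):
    # Build an m-by-n Matrix of plain Symbols (name_i_j) that nsolve accepts.
    return sy.Matrix(sy.symarray(name, (m, n)))

v = explicit_matrix("v", 2, 1)
equation = v - sy.Matrix([17, 23])
print(sy.nsolve(equation, list(v), [0, 0]))  # Matrix([[17.0], [23.0]])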
In Maxima I have calculated a complicated 2x2 matrix, let's call it "HUGE", and it has 8 variables in it:
a, b, c, d, tr11, tr12, tr21, tr22
This is not correct syntax, but just to illustrate:
Given
HUGE(a,b,c,d,tr11,tr12,tr21,tr22)=matrix([0,0],[0,0])
I want to factorize/transform it into
HUGE_NEW(a,b,c,d)=matrix([tr11,tr21],[tr12,tr22])
By writing things like HUGE_NEW(a,b,c,d) I mean a new matrix that contains variables a,b,c,d only.
In other words I want to say something like
solve (HUGE - matrix([tr11,tr21],[tr12,tr22]) = matrix([0,0],[0,0]), [a, b, c, d], [tr11, tr12, tr21, tr22])
How can I go about it? My matrix is really complicated, so doing it with pen and paper is not an option.
In case someone asks for the actual value of my HUGE-matrix to try it out:
HUGE: matrix([-(sqrt(d^2-2*a*d+4*b*c+a^2)*((b*d+a*b)*tr12*tr22+(-d^2-a*d)*tr12*tr21+(-b*d-a*b)*tr11*tr12+(d^2+a*d)*tr11^2)+(b*d^2+2*b^2*c+a^2*b)*tr12*tr22+((-2*b*c-a^2)*d-d^3)*tr12*tr21+(b*d^2+2*b^2*c+a^2*b)*tr11*tr12+((-2*b*c-a^2)*d-d^3)*tr11^2)/(2*a*d-2*b*c),-(sqrt(d^2-2*a*d+4*b*c+a^2)*((b*d+a*b)*tr22^2+(-d^2-a*d)*tr21*tr22+((-b*d-a*b)*tr12+(d^2+a*d)*tr11)*tr21)+(b*d^2+2*b^2*c+a^2*b)*tr22^2+((-2*b*c-a^2)*d-d^3)*tr21*tr22+((b*d^2+2*b^2*c+a^2*b)*tr12+((-2*b*c-a^2)*d-d^3)*tr11)*tr21)/(2*a*d-2*b*c)],[(sqrt(d^2-2*a*d+4*b*c+a^2)*((a*d+a^2)*tr12*tr22+(-c*d-a*c)*tr12*tr21+(-a*d-a^2)*tr11*tr12+(c*d+a*c)*tr11^2)+(a*d^2+2*a*b*c+a^3)*tr12*tr22+(-c*d^2-2*b*c^2-a^2*c)*tr12*tr21+(a*d^2+2*a*b*c+a^3)*tr11*tr12+(-c*d^2-2*b*c^2-a^2*c)*tr11^2)/(2*a*d-2*b*c),(sqrt(d^2-2*a*d+4*b*c+a^2)*((a*d+a^2)*tr22^2+(-c*d-a*c)*tr21*tr22+((-a*d-a^2)*tr12+(c*d+a*c)*tr11)*tr21)+(a*d^2+2*a*b*c+a^3)*tr22^2+(-c*d^2-2*b*c^2-a^2*c)*tr21*tr22+((a*d^2+2*a*b*c+a^3)*tr12+(-c*d^2-2*b*c^2-a^2*c)*tr11)*tr21)/(2*a*d-2*b*c)])
Edit:
When I do this:
solve(HUGE = matrix([0,0],[0,0]),[tr11,tr12,tr21,tr11]);
maxima returns:
[]
I also tried:
solve([
-(sqrt(d^2-2*a*d+4*b*c+a^2)*((b*d+a*b)*tr12*tr22+(-d^2-a*d)*tr12*tr21+(-b*d-a*b)*tr11*tr12+(d^2+a*d)*tr11^2)+(b*d^2+2*b^2*c+a^2*b)*tr12*tr22+((-2*b*c-a^2)*d-d^3)*tr12*tr21+(b*d^2+2*b^2*c+a^2*b)*tr11*tr12+((-2*b*c-a^2)*d-d^3)*tr11^2+2*a^2*d-2*a*b*c)/(2*a*d-2*b*c)=0,
-(sqrt(d^2-2*a*d+4*b*c+a^2)*((b*d+a*b)*tr22^2+(-d^2-a*d)*tr21*tr22+((-b*d-a*b)*tr12+(d^2+a*d)*tr11)*tr21)+(b*d^2+2*b^2*c+a^2*b)*tr22^2+((-2*b*c-a^2)*d-d^3)*tr21*tr22+((b*d^2+2*b^2*c+a^2*b)*tr12+((-2*b*c-a^2)*d-d^3)*tr11)*tr21+2*a*c*d-2*b*c^2)/(2*a*d-2*b*c)=0,
(sqrt(d^2-2*a*d+4*b*c+a^2)*((a*d+a^2)*tr12*tr22+(-c*d-a*c)*tr12*tr21+(-a*d-a^2)*tr11*tr12+(c*d+a*c)*tr11^2)+(a*d^2+2*a*b*c+a^3)*tr12*tr22+(-c*d^2-2*b*c^2-a^2*c)*tr12*tr21+(a*d^2+2*a*b*c+a^3)*tr11*tr12+(-c*d^2-2*b*c^2-a^2*c)*tr11^2-2*a*b*d+2*b^2*c)/(2*a*d-2*b*c)=0,
(sqrt(d^2-2*a*d+4*b*c+a^2)*((a*d+a^2)*tr22^2+(-c*d-a*c)*tr21*tr22+((-a*d-a^2)*tr12+(c*d+a*c)*tr11)*tr21)+(a*d^2+2*a*b*c+a^3)*tr22^2+(-c*d^2-2*b*c^2-a^2*c)*tr21*tr22+((a*d^2+2*a*b*c+a^3)*tr12+(-c*d^2-2*b*c^2-a^2*c)*tr11)*tr21-2*a*d^2+2*b*c*d)/(2*a*d-2*b*c)=0
],[tr11,tr12,tr21,tr22]);
which results in an error:
Polynomial quotient is not exact
-- an error. To debug this try: debugmode(true);
I see your equations have sqrt in them. Try the to_poly_solve package for that, i.e.:
load(to_poly_solve);
to_poly_solve([HUGE[1,1] = 0, HUGE[1,2] = 0, HUGE[2,1] = 0, HUGE[2,2] = 0],
              [tr11, tr12, tr21, tr22]);
When I try that, I get several messy expressions as a result. I didn't check to see if they are correct solutions. Good luck, have fun, let us know how it goes.
I have not used PackedArray before, but just started looking at using it after reading some discussion of it here today.
What I have is lots of large 1D and 2D matrices of all reals, with nothing symbolic (it is a finite difference PDE solver), so I thought I should take advantage of using PackedArray.
I have an initialization function where I allocate all the data/grids needed, so I went and used ToPackedArray on them. It seems a bit faster, but I need to do more performance testing to better compare speed before and after, and also to compare RAM usage.
But while I was looking at this, I noticed that some operations in Mathematica automatically return packed arrays and some do not.
For example, this does not return a packed array
a = Table[RandomReal[], {5}, {5}];
Developer`PackedArrayQ[a]
But this does
a = RandomReal[1, {5, 5}];
Developer`PackedArrayQ[a]
and this does
a = Table[0, {5}, {5}];
b = ListConvolve[ {{0, 1, 0}, {1, 4, 1}, {0, 1, 1}}, a, 1];
Developer`PackedArrayQ[b]
and matrix multiplication also returns its result as a packed array
a = Table[0, {5}, {5}];
b = a.a;
Developer`PackedArrayQ[b]
But element-wise multiplication does not
b = a*a;
Developer`PackedArrayQ[b]
My question: Is there a list somewhere which documents which Mathematica commands return packed arrays and which do not (assuming the data meets the requirements, such as Real, not mixed, no symbolic, etc.)?
Also, a minor question: do you think it is better to check first if a list/matrix is already packed before calling ToPackedArray on it? I would think calling ToPackedArray on an already packed list costs nothing, as the call will return right away.
thanks,
update (1)
Just wanted to mention that I found PackedArray symbols are not allowed in a demo CDF - I got an error uploading one that used them. So I had to take all my packing code out. Since I mainly write demos, this topic is now of only academic interest for me. But I wanted to thank everyone for their time and good answers.
There isn't a comprehensive list. To point out a few things:
Basic operations with packed arrays will tend to remain packed:
In[66]:= a = RandomReal[1, {5, 5}];
In[67]:= Developer`PackedArrayQ /@ {a, a.a, a*a}
Out[67]= {True, True, True}
Note above that my version (8.0.4) doesn't unpack for element-wise multiplication.
Whether a Table will result in a packed array depends on the number of elements:
In[71]:= Developer`PackedArrayQ[Table[RandomReal[], {24}, {10}]]
Out[71]= False
In[72]:= Developer`PackedArrayQ[Table[RandomReal[], {24}, {11}]]
Out[72]= True
In[73]:= Developer`PackedArrayQ[Table[RandomReal[], {25}, {10}]]
Out[73]= True
On["Packing"] will turn on messages to let you know when things unpack:
In[77]:= On["Packing"]
In[78]:= a = RandomReal[1, 10];
In[79]:= Developer`PackedArrayQ[a]
Out[79]= True
In[80]:= a[[1]] = 0 (* force unpacking due to type mismatch *)
Developer`FromPackedArray::punpack1: Unpacking array with dimensions {10}. >>
Out[80]= 0
Operations that do per-element inspection will usually unpack the array:
In[81]:= a = RandomReal[1, 10];
In[82]:= Position[a, Max[a]]
Developer`FromPackedArray::unpack: Unpacking array in call to Position. >>
Out[82]= {{4}}
The penalty for calling ToPackedArray on an already packed list is small enough that I wouldn't worry about it too much:
In[90]:= a = RandomReal[1, 10^7];
In[91]:= Timing[Do[Identity[a], {10^5}];]
Out[91]= {0.028089, Null}
In[92]:= Timing[Do[Developer`ToPackedArray[a], {10^5}];]
Out[92]= {0.043788, Null}
The frontend prefers packed to unpacked arrays, which can show up when dealing with Dynamic and Manipulate:
In[97]:= Developer`PackedArrayQ[{1}]
Out[97]= False
In[98]:= Dynamic[Developer`PackedArrayQ[{1}]]
Out[98]= True
When looking into performance, focus on cases where large lists are getting unpacked rather than the small ones, unless the small ones are in big loops.
This is just an addendum to Brett's answer:
SystemOptions["CompileOptions"]
will give you the length thresholds above which a function returns a packed array. So if you did need to pack a small list, as an alternative to using Developer`ToPackedArray you could temporarily set a smaller value for one of the compile options, e.g.
SetSystemOptions["CompileOptions" -> {"TableCompileLength" -> 20}]
Note also some differences between functions which, to me at least, don't seem intuitive, so I generally have to test these kinds of things whenever I use them rather than instinctively knowing what will work best:
f = # + 1 &;
g[x_] := x + 1;
data = RandomReal[1, 10^6];
On["Packing"]
Timing[Developer`PackedArrayQ[f /@ data]]
{0.131565, True}
Timing[Developer`PackedArrayQ[g /@ data]]
Developer`FromPackedArray::punpack1: Unpacking array with dimensions {1000000}.
{1.95083, False}
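(A plausible explanation: Map can auto-compile the pure function f over a long packed array, whereas the pattern-defined g has to be evaluated element by element, which unpacks; the relevant threshold appears as "MapCompileLength" in SystemOptions["CompileOptions"].)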
Another addition to Brett's answer: if a list is already a packed array, then ToPackedArray is very fast, since this is checked quite early. Also you might find this valuable:
http://library.wolfram.com/infocenter/Articles/3141/
In general, for numerics work, look for talks by Rob Knapp and/or Mark Sofroniou.
When I develop numerics codes, I write the function and then use On["Packing"] to make sure that everything is packed that needs to be packed.
Concerning Mike's answer, the threshold has been introduced because for small arrays there is overhead. Where the threshold lies is hardware-dependent. It might be an idea to write a function that sets these thresholds based on measurements done on the computer.
The problem:
I am trying to solve this differential equation:
K[x_, x1_] := 1;
NDSolve[{A''[x] == Integrate[K[x, x1] A[x1], {x1, 0, 1}],
A[0] == 0, A'[1] == 1}, A[x], x]
and I'm getting errors (Function::slotn and NDSolve::ndnum)
(it should return a numeric function that is equal to 3/16 x^2 + 5/8 x)
I am looking for a way to solve this differential equation: Is there a way to write it in a better form, such that NDSolve will understand it? Is there another function or package that can help?
Note 1: In my full problem, K[x, x1] is not 1 -- it depends (in a complex way) on x and x1.
Note 2: Naively differentiating both sides of the equation with respect to x won't work, because the integration limits are definite.
My first impression:
It seems that Mathematica doesn't like me referencing a single point of A[x] - the same errors occur when I'm doing this simplified version:
NDSolve[{A''[x] == A[0.5], A[0] == 0, A'[1] == 1}, A[x], x]
(it should return a numeric function that is equal to 2/11 x^2 + 7/11 x)
In this case one can avoid the problem by analytically solving A''[x] == c and then finding c, but for my first problem that doesn't seem to work - it only transforms the differential equation into an integral one, which (N)DSolve doesn't solve afterwards.
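To spell out that workaround for the simplified version: take A[x] = c x^2/2 + b x (which already satisfies A[0] == 0, with c == A''[x]). Then A'[1] == c + b == 1 and c == A[0.5] == c/8 + b/2, which gives c == 4/11 and b == 7/11, i.e. A[x] == 2/11 x^2 + 7/11 x, as quoted above.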
I can suggest a way to reduce your equation to an integral equation, which can be solved numerically by approximating its kernel with a matrix, thereby reducing the integration to matrix multiplication.
First, it is clear that the equation can be integrated twice over x, first from 1 to x and then from 0 to x, so that (using the boundary conditions A[0] == 0 and A'[1] == 1):
A[x] == A[0] + A'[1]*x + Integrate[intIntK[x, x1]*A[x1], {x1, 0, 1}]
where intIntK[x, x1] == Integrate[Integrate[K[z, x1], {z, 1, y}], {y, 0, x}] is the twice-integrated kernel.
We can now discretize this equation, putting it on an equidistant grid with spacing delta:
A[x_i] == A[0] + A'[1]*x_i + delta*Sum[intIntK[x_i, x_j]*A[x_j], {j}]
Here, A[x] becomes a vector, and the integrated kernel intIntK becomes a matrix, while integration is replaced by matrix multiplication. The problem is then reduced to a system of linear equations.
The easiest case (that I will consider here) is when the kernel intIntK can be derived analytically - in this case this method will be quite fast. Here is the function to produce the integrated kernel as a pure function:
Clear[computeDoubleIntK]
computeDoubleIntK[kernelF_] :=
Block[{x, x1},
Function[
Evaluate[
Integrate[
Integrate[kernelF[y, x1], {y, 1, x}] /. x -> y, {y, 0, x}] /.
{x -> #1, x1 -> #2}]]];
In our case:
In[99]:= K[x_,x1_]:=1;
In[100]:= kernel = computeDoubleIntK[K]
Out[100]= -#1+#1^2/2&
Here is the function to produce the kernel matrix and the r.h.s. vector:
computeDiscreteKernelMatrixAndRHS[intkernel_, a0_, aprime1_ ,
delta_, interval : {_, _}] :=
Module[{grid, rhs, matrix},
  grid = Range[Sequence @@ interval, delta];
rhs = a0 + aprime1*grid; (* constant plus a linear term *)
matrix =
IdentityMatrix[Length[grid]] - delta*Outer[intkernel, grid, grid];
{matrix, rhs}]
To give a very rough idea of how this may look, I use delta = 1/2 here:
In[101]:= computeDiscreteKernelMatrixAndRHS[kernel,0,1,1/2,{0,1}]
Out[101]= {{{1,0,0},{3/16,19/16,3/16},{1/4,1/4,5/4}},{0,1/2,1}}
We now need to solve the linear equation, and interpolate the result, which is done by the following function:
Clear[computeSolution];
computeSolution[intkernel_, a0_, aprime1_ , delta_, interval : {_, _}] :=
With[{grid = Range[Sequence @@ interval, delta]},
 Interpolation@Transpose[{
   grid,
   LinearSolve @@
computeDiscreteKernelMatrixAndRHS[intkernel, a0, aprime1, delta,interval]
}]]
Here I will call it with a delta = 0.1:
In[90]:= solA = computeSolution[kernel,0,1,0.1,{0,1}]
Out[90]= InterpolatingFunction[{{0.,1.}},<>]
We now plot the result vs. the exact analytical solution found by @Sasha, as well as the error (plots not reproduced here).
I intentionally chose delta large enough so that the errors are visible. If you choose delta to be, say, 0.01, the plots will be visually identical. Of course, the price of taking a smaller delta is the need to produce and solve larger matrices.
For kernels that can be obtained analytically, the main bottleneck will be in LinearSolve, but in practice it is pretty fast (for matrices that are not too large). When the kernel cannot be integrated analytically, the main bottleneck will be in computing it at many points (matrix creation; the matrix inverse has a larger asymptotic complexity, but this only starts to play a role for really large matrices - which are not necessary in this approach, since it can be combined with an iterative one - see below). You will typically define:
intK[x_?NumericQ, x1_?NumericQ] := NIntegrate[K[y, x1], {y, 1, x}]
intIntK[x_?NumericQ, x1_?NumericQ] := NIntegrate[intK[z, x1], {z, 0, x}]
As a way to speed it up in such cases, you can precompute the kernel intK on a grid and then interpolate, and do the same for intIntK. This will however introduce additional errors, which you'll have to estimate (and account for).
The grid itself needs not be equidistant (I just used it for simplicity), but may (and probably should) be adaptive, and generally non-uniform.
As a final illustration, consider an equation with a non-trivial but symbolically integrable kernel:
In[146]:= sinkern = computeDoubleIntK[50*Sin[Pi/2*(#1-#2)]&]
Out[146]= (100 (2 Sin[1/2 \[Pi] (-#1+#2)]+Sin[(\[Pi] #2)/2]
(-2+\[Pi] #1)))/\[Pi]^2&
In[157]:= solSin = computeSolution[sinkern,0,1,0.01,{0,1}]
Out[157]= InterpolatingFunction[{{0.,1.}},<>]
Here are some checks:
In[163]:= Chop[{solSin[0],solSin'[1]}]
Out[163]= {0,1.}
In[153]:=
diff[x_?NumericQ]:=
solSin''[x] - NIntegrate[50*Sin[Pi/2*(#1-#2)]&[x,x1]*solSin[x1],{x1,0,1}];
In[162]:= diff /@ Range[0,1,0.1]
Out[162]= {-0.0675775,-0.0654974,-0.0632056,-0.0593575,-0.0540479,-0.0474074,
-0.0395995,-0.0308166,-0.0212749,-0.0112093,0.000369261}
To conclude, I just want to stress that one has to perform a careful error-estimation analysis for this method, which I did not do.
EDIT
You can also use this method to get the initial approximate solution, and then iteratively improve it using FixedPoint or other means - in this way you will have a relatively fast convergence and will be able to reach the required precision without the need to construct and solve huge matrices.
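For readers working outside Mathematica, here is a minimal NumPy sketch (my own translation, not part of the original answer) of the same discretization for the constant kernel K == 1, checked against the exact solution 3/16 x^2 + 5/8 x quoted in the question:
import numpy as np

# Solve A(x) = a0 + a1*x + Integral_0^1 G(x, s) A(s) ds, the twice-integrated
# form of A''(x) = Integral_0^1 K(x, s) A(s) ds with A(0) = 0, A'(1) = 1.
# For K == 1 the integrated kernel is G(x, s) = x**2/2 - x, independent of s.
a0, a1 = 0.0, 1.0
h = 0.01
x = np.arange(0.0, 1.0 + h / 2, h)
G = np.tile((x**2 / 2 - x)[:, None], (1, len(x)))
A = np.linalg.solve(np.eye(len(x)) - h * G, a0 + a1 * x)

print(np.max(np.abs(A - (3 * x**2 / 16 + 5 * x / 8))))  # small; shrinks with h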
This is complementary to Leonid Shifrin's approach. We start with a linear function that interpolates the value and first derivative at the starting point. We use that in the integration with the given kernel function. We can then iterate, using each previous approximation in the integrated kernel that is used to make the next approximation.
I show an example below, using a more complicated kernel than just a constant function. I'll take it through two iterations and show tables of discrepancies.
kernel[x_, y_] := Sqrt[x]/(y^2 + 1/5)*Sin[x^2 + y]
intkern[x_?NumericQ, aa_] :=
NIntegrate[kernel[x, y]*aa[y], {y, 0, 1}, MinRecursion -> 2,
AccuracyGoal -> 3]
Clear[a];
a0 = 0;
a1 = 1;
a[0][x_] := a0 + a1*x
soln1 = a[1][x] /.
First[NDSolve[{(a[1]^\[Prime]\[Prime])[x] == intkern[x, a[0]],
a[1][0] == a0, a[1][1] == a1}, a[1][x], {x, 0, 1}]];
a[1][x_] = soln1;
In[283]:= Table[a[1]''[x] - intkern[x, a[1]], {x, 0., 1, .1}]
Out[283]= {4.336808689942018*10^-19, 0.01145100326794241, \
0.01721655945379122, 0.02313249302884235, 0.02990900241909161, \
0.03778448183557359, 0.04676409320217928, 0.05657128568058478, \
0.06665818935524814, 0.07624149919589895, 0.08412643746245929}
In[285]:=
soln2 = a[2][x] /.
First[NDSolve[{(a[2]^\[Prime]\[Prime])[x] == intkern[x, a[1]],
a[2][0] == a0, a[2][1] == a1}, a[2][x], {x, 0, 1}]];
a[2][x_] = soln2;
In[287]:= Table[a[2]''[x] - intkern[x, a[2]], {x, 0., 1, .1}]
Out[287]= {-2.168404344971009*10^-19, -0.001009606971360516, \
-0.00152476679745811, -0.002045817184941901, -0.002645356229312557, \
-0.003343218015068372, -0.004121109614310836, -0.004977453722712966, \
-0.005846840469889258, -0.006731367269472544, -0.007404971586975062}
So we have errors of less than .01 at this stage. Not too bad. One drawback is that it was fairly slow to get the second approximation. There may be ways to tune NDSolve to improve on that.
This is complementary to Leonid's method for two reasons.
(1) If this did not converge well because the initial linear approximation was not sufficiently close to the true result, one might instead begin with an approximation found by a finite differencing scheme. That would be akin to what he did.
(2) He pretty much indicated this himself, as a method that might follow his and produce refinements.
Daniel Lichtblau
The way your equation is currently written, A''[x] == const, and that constant is independent of x. Hence the solution always has the form of a quadratic polynomial. Your problem then reduces to solving for the indeterminate coefficients:
In[13]:= A[x_] := a2 x^2 + a1 x + a0;
In[14]:= K[x_, x1_] := 1;
In[16]:= Solve[{A''[x] == Integrate[K[x, x1] A[x1], {x1, 0, 1}],
A[0] == 0, A'[1] == 1}, {a2, a1, a0}]
Out[16]= {{a2 -> 3/16, a1 -> 5/8, a0 -> 0}}
In[17]:= A[x] /. First[%]
Out[17]= (5 x)/8 + (3 x^2)/16
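As a quick check: with these coefficients A''[x] == 3/8, and Integrate[A[x1], {x1, 0, 1}] == 5/16 + 1/16 == 3/8, while A[0] == 0 and A'[1] == 3/8 + 5/8 == 1, so all three conditions hold.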