I have written code which draws the Sierpinski fractal. It is really slow because it uses recursion. Does anyone know how I could write the same code without recursion so that it is quicker? Here is my code:
midpoint[p1_, p2_] := Mean[{p1, p2}]
trianglesurface[A_, B_, C_] := Graphics[Polygon[{A, B, C}]]
sierpinski[A_, B_, C_, 0] := trianglesurface[A, B, C]
sierpinski[A_, B_, C_, n_Integer] :=
Show[
sierpinski[A, midpoint[A, B], midpoint[C, A], n - 1],
sierpinski[B, midpoint[A, B], midpoint[B, C], n - 1],
sierpinski[C, midpoint[C, A], midpoint[C, B], n - 1]
]
Edit:
I have written it with the Chaos Game approach in case someone is interested. Thank you for your great answers!
Here is the code:
random[A_, B_, C_] := Module[{a, result},
a = RandomInteger[2];
Which[a == 0, result = A,
a == 1, result = B,
a == 2, result = C]]
Chaos[A_List, B_List, C_List, S_List, n_Integer] :=
Module[{list},
list = NestList[Mean[{random[A, B, C], #}] &,
Mean[{random[A, B, C], S}], n];
ListPlot[list, Axes -> False, PlotStyle -> PointSize[0.001]]]
This uses Scale and Translate in combination with Nest to create the list of triangles.
Manipulate[
Graphics[{Nest[
Translate[Scale[#, 1/2, {0, 0}], pts/2] &, {Polygon[pts]}, depth]},
PlotRange -> {{0, 1}, {0, 1}}, PlotRangePadding -> .2],
{{pts, {{0, 0}, {1, 0}, {1/2, 1}}}, Locator},
{{depth, 4}, Range[7]}]
If you would like a high-quality approximation of the Sierpinski triangle, you can use an approach called the chaos game. The idea is as follows: pick three points that you wish to define as the vertices of the Sierpinski triangle and choose one of those points randomly. Then, repeat the following procedure as long as you'd like:
Choose a random vertex of the triangle.
Move the current point to the halfway point between its current location and that vertex of the triangle.
Plot a pixel at that point.
As you can see in this animation, this procedure will eventually trace out a high-resolution version of the triangle. If you'd like, you can multithread it so that multiple threads plot pixels at once, which will draw the triangle more quickly.
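For instance, a minimal chaos-game sketch in Mathematica (not from the original answer; the vertex coordinates are arbitrary, and the asker's edit above does essentially the same thing) could look like this:
chaosGame[n_Integer] :=
 Module[{verts = N@{{0, 0}, {1, 0}, {1/2, Sqrt[3]/2}}},
  ListPlot[
   NestList[Mean[{RandomChoice[verts], #}] &, RandomChoice[verts], n],
   Axes -> False, AspectRatio -> Automatic, PlotStyle -> PointSize[Tiny]]]

chaosGame[10^4]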
Alternatively, if you just want to translate your recursive code into iterative code, one option would be to use a worklist approach. Maintain a stack (or queue) that contains a collection of records, each of which holds the vertices of a triangle and the number n. Initially, put the vertices of the main triangle and the fractal depth into this worklist. Then:
While the worklist is not empty:
  Remove the first element from the worklist.
  If its n value is not zero:
    Draw the triangle connecting the midpoints of its sides.
    For each of the three corner subtriangles, add that subtriangle with n-value n - 1 to the worklist.
This essentially simulates the recursion iteratively.
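For concreteness, here is a rough Mathematica sketch of that worklist idea (not part of the original answer); it reuses the midpoint helper from the question and collects the depth-0 triangles, mirroring the recursive code rather than drawing the midpoint "holes":
sierpinskiIterative[A_, B_, C_, depth_Integer] :=
 Module[{work = {{A, B, C, depth}}, tris = {}, a, b, c, n},
  While[work =!= {},
   {a, b, c, n} = First[work]; work = Rest[work];
   If[n == 0,
    AppendTo[tris, Polygon[{a, b, c}]],
    work = Join[work,
      {{a, midpoint[a, b], midpoint[c, a], n - 1},
       {b, midpoint[a, b], midpoint[b, c], n - 1},
       {c, midpoint[c, a], midpoint[b, c], n - 1}}]]];
  Graphics[tris]]

sierpinskiIterative[{0, 0}, {1, 0}, {1/2, 1}, 5]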
Hope this helps!
You may try
l = {{{{0, 1}, {1, 0}, {0, 0}}, 8}};
g = {};
While [l != {},
k = l[[1, 1]];
n = l[[1, 2]];
l = Rest[l];
If[n != 0,
AppendTo[g, k];
(AppendTo[l, {{#1, Mean[{#1, #2}], Mean[{#1, #3}]}, n - 1}] & @@ #) & /@
NestList[RotateLeft, k, 2]
]]
Show@Graphics[{EdgeForm[Thin], Pink, Polygon@g}]
And then replace the AppendTo with something more efficient; see for example https://mathematica.stackexchange.com/questions/845/internalbag-inside-compile
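The Internal`Bag functions discussed at that link are undocumented, so the following is only a tentative sketch of the usual usage pattern:
bag = Internal`Bag[];
Do[Internal`StuffBag[bag, i^2], {i, 5}];
Internal`BagPart[bag, All]  (* expected: {1, 4, 9, 16, 25} *)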
Edit
Faster:
f[1] = {{{0, 1}, {1, 0}, {0, 0}}, 8};
i = 1;
g = {};
While[i != 0,
k = f[i][[1]];
n = f[i][[2]];
i--;
If[n != 0,
g = Join[g, {k}];
{f[i + 1], f[i + 2], f[i + 3]} =
  ({{#1, Mean[{#1, #2}], Mean[{#1, #3}]}, n - 1} & @@ #) & /@
   NestList[RotateLeft, k, 2];
i = i + 3
]]
Show@Graphics[{EdgeForm[Thin], Pink, Polygon@g}]
Since the triangle-based functions have already been well covered, here is a raster-based approach.
This iteratively constructs Pascal's triangle, takes it modulo 2, and plots the result.
NestList[{0, ##} + {##, 0} & @@ # &, {1}, 511] ~Mod~ 2 // ArrayPlot
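An equivalent, more explicit sketch (not from the original answer) uses the fact that the pattern is just the binomial coefficients modulo 2; this version comes out left-aligned rather than centered:
ArrayPlot[Table[Mod[Binomial[i, j], 2], {i, 0, 511}, {j, 0, 511}]]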
Clear["`*"];
sierpinski[{a_, b_, c_}] :=
With[{ab = (a + b)/2, bc = (b + c)/2, ca = (a + c)/2},
{{a, ab, ca}, {ab, b, bc}, {ca, bc, c}}];
pts = {{0, 0}, {1, 0}, {1/2, Sqrt[3]/2}} // N;
n = 5;
d = Nest[Join @@ sierpinski /@ # &, {pts}, n]; // AbsoluteTiming
Graphics[{EdgeForm@Black, Polygon@d}]
(*sierpinski=Map[Mean, Tuples[#,2]~Partition~3 ,{2}]&;*)
Here are chaos-game versions in 2D and 3D; see also https://mathematica.stackexchange.com/questions/22256/how-can-i-compile-this-function
ListPlot@NestList[(# + RandomChoice[{{0, 0}, {2, 0}, {1, 2}}])/2 &,
  N@{0, 0}, 10^4]
With[{data =
   NestList[(# + RandomChoice@{{0, 0}, {1, 0}, {.5, .8}})/2 &,
    N@{0, 0}, 10^4]},
 Graphics[Point[data,
   VertexColors -> ({1, #[[1]], #[[2]]} & /@ Rescale@data)]]
]
With[{v = {{0, 0, 0.6}, {-0.3, -0.5, -0.2}, {-0.3, 0.5, -0.2}, {0.6,
0, -0.2}}},
ListPointPlot3D[
NestList[(# + RandomChoice[v])/2 &, N@{0, 0, 0}, 10^4],
BoxRatios -> 1, ColorFunction -> "Pastel"]
]
I need to create a 3 by 3 real orthonormal symbolic matrix in Mathematica.
How can I do so?
Not that I recommend this, but...
m = Array[a, {3, 3}];
{q, r} = QRDecomposition[m];
q2 = Simplify[q /. Conjugate -> Identity]
So q2 is a symbolic orthogonal matrix (assuming we work over reals).
You seem to want some SO(3) group parametrization in Mathematica, I think. You will only have 3 independent symbols (variables), since there are 6 constraints from the mutual orthogonality of the columns and their norms being equal to 1. One way is to construct independent rotations around the 3 axes and multiply those matrices. Here is the (perhaps too complex) code to do that:
makeOrthogonalMatrix[p_Symbol, q_Symbol, t_Symbol] :=
Module[{permute, matrixGeneratingFunctions},
permute = Function[perm, Permute[Transpose[Permute[#, perm]], perm] &];
matrixGeneratingFunctions =
Function /@ FoldList[
permute[#2][#1] &,
{{Cos[#], 0, Sin[#]}, {0, 1, 0}, {-Sin[#], 0, Cos[#]}},
{{2, 1, 3}, {3, 2, 1}}];
#1.#2.#3 & @@ MapThread[Compose, {matrixGeneratingFunctions, {p, q, t}}]];
Here is how this works:
In[62]:= makeOrthogonalMatrix[x,y,z]
Out[62]=
{{Cos[x] Cos[z]+Sin[x] Sin[y] Sin[z],Cos[z] Sin[x] Sin[y]-Cos[x] Sin[z],Cos[y] Sin[x]},
{Cos[y] Sin[z],Cos[y] Cos[z],-Sin[y]},
{-Cos[z] Sin[x]+Cos[x] Sin[y] Sin[z],Cos[x] Cos[z] Sin[y]+Sin[x] Sin[z],Cos[x] Cos[y]}}
You can check that the matrix is orthonormal by using Simplify over the various column (or row) dot products.
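For example, one such check could look like this (not part of the original answer):
With[{m = makeOrthogonalMatrix[x, y, z]},
 Simplify[m.Transpose[m]]]  (* should reduce to the 3 x 3 identity matrix *)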
I have found a "direct" way to impose special orthogonality.
See below.
(*DEFINITION OF ORTHOGONALITY AND SELF ADJUNCTNESS CONDITIONS:*)
MinorMatrix[m_List?MatrixQ] := Map[Reverse, Minors[m], {0, 1}]
CofactorMatrix[m_List?MatrixQ] := MapIndexed[#1 (-1)^(Plus @@ #2) &, MinorMatrix[m], {2}]
UpperTriangle[ m_List?MatrixQ] := {m[[1, 1 ;; 3]], {0, m[[2, 2]], m[[2, 3]]}, {0, 0, m[[3, 3]]}};
FlatUpperTriangle[m_List?MatrixQ] := Flatten[{m[[1, 1 ;; 3]], m[[2, 2 ;; 3]], m[[3, 3]]}];
Orthogonalityconditions[m_List?MatrixQ] := Thread[FlatUpperTriangle[m.Transpose[m]] == FlatUpperTriangle[IdentityMatrix[3]]];
Selfadjunctconditions[m_List?MatrixQ] := Thread[FlatUpperTriangle[CofactorMatrix[m]] == FlatUpperTriangle[Transpose[m]]];
SO3conditions[m_List?MatrixQ] := Flatten[{Selfadjunctconditions[m], Orthogonalityconditions[m]}];
(*Building of an SO(3) matrix*)
mat = Table[Subscript[m, i, j], {i, 3}, {j, 3}];
$Assumptions = SO3conditions[mat]
Then
Simplify[Det[mat]]
gives 1; and
MatrixForm[Simplify[mat.Transpose[mat]]]
gives the identity matrix; finally
MatrixForm[Simplify[CofactorMatrix[mat] - Transpose[mat]]]
gives a zero matrix.
========================================================================
This is what I was looking for when I asked my question!
However, let me know your thoughts on this method.
Marcellus
Marcellus, you have to use some parametrization of SO(3), since your general matrix has to reflect the RP3 topology of the group. No single parametrization will cover the whole group without either multivaluedness or singular points. Wikipedia has a nice page about the various charts on SO(3).
Maybe one of the conceptually simplest is the exponential map from the Lie algebra so(3).
Define an antisymmetric, real A (which spans so(3))
A = {{0, a, -c},
{-a, 0, b},
{c, -b, 0}};
Then MatrixExp[A] is an element of SO(3).
We can check that this is so, using
Transpose[MatrixExp[A]].MatrixExp[A] == IdentityMatrix[3] // Simplify
If we write t^2 = a^2 + b^2 + c^2, we can simplify the matrix exponential down to
{{ b^2 + (a^2 + c^2) Cos[t] , b c (1 - Cos[t]) + a t Sin[t], a b (1 - Cos[t]) - c t Sin[t]},
{b c (1 - Cos[t]) - a t Sin[t], c^2 + (a^2 + b^2) Cos[t] , a c (1 - Cos[t]) + b t Sin[t]},
{a b (1 - Cos[t]) + c t Sin[t], a c (1 - Cos[t]) - b t Sin[t], a^2 + (b^2 + c^2) Cos[t]}} / t^2
Note that this is basically the same parametrization as RotationMatrix gives.
Compare with the output from
RotationMatrix[s, {b, c, a}] // ComplexExpand // Simplify[#, Trig -> False] &;
% /. a^2 + b^2 + c^2 -> 1
Although I really like the idea of Marcellus' answer to his own question, it's not completely correct. Unfortunately, the conditions he arrives at also result in
Simplify[Transpose[mat] - mat]
evaluating to a zero matrix! This is clearly not right. Here's an approach that's both correct and more direct:
OrthogonalityConditions[m_List?MatrixQ] := Thread[Flatten[m.Transpose[m]] == Flatten[IdentityMatrix[3]]];
SO3Conditions[m_List?MatrixQ] := Flatten[{OrthogonalityConditions[m], Det[m] == 1}];
i.e. multiplying a rotation matrix by its transpose results in the identity matrix, and the determinant of a rotation matrix is 1.
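A quick usage sketch along the lines of Marcellus' setup above (the comments state what the conditions should imply):
mat = Table[Subscript[m, i, j], {i, 3}, {j, 3}];
$Assumptions = SO3Conditions[mat];
Simplify[mat.Transpose[mat]]    (* should reduce to the identity matrix *)
Simplify[Det[mat]]              (* should reduce to 1 *)
Simplify[Transpose[mat] - mat]  (* no longer forced to be a zero matrix *)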
While looking at belisarius's question about generating non-singular integer matrices with a uniform distribution of their elements, I was studying a paper by Dana Randal, "Efficient generation of random non-singular matrices". The algorithm proposed is recursive, and involves generating a matrix of lower dimension and assigning it to a given minor. I used combinations of Insert and Transpose to do it, but there must be more efficient ways of doing it. How would you do it?
The following is the code:
Clear[Gen];
Gen[p_, 1] := {{{1}}, RandomInteger[{1, p - 1}, {1, 1}]};
Gen[p_, n_] := Module[{v, r, aa, tt, afr, am, tm},
While[True,
v = RandomInteger[{0, p - 1}, n];
r = LengthWhile[v, # == 0 &] + 1;
If[r <= n, Break[]]
];
afr = UnitVector[n, r];
{am, tm} = Gen[p, n - 1];
{Insert[
Transpose[
Insert[Transpose[am], RandomInteger[{0, p - 1}, n - 1], r]], afr,
1], Insert[
Transpose[Insert[Transpose[tm], ConstantArray[0, n - 1], r]], v,
r]}
]
NonSingularRandomMatrix[p_?PrimeQ, n_] := Mod[Dot @@ Gen[p, n], p]
It does generate a non-singular matrix and has a uniform distribution of matrix elements, but it requires p to be prime.
The code is also not very efficient, which I suspect is due to my inefficient matrix constructors:
In[10]:= Timing[NonSingularRandomMatrix[101, 300];]
Out[10]= {0.421, Null}
EDIT So let me condense my question. The minor matrix of a given matrix m can be computed as follows:
MinorMatrix[m_?MatrixQ, {i_, j_}] :=
Drop[Transpose[Drop[Transpose[m], {j}]], {i}]
It is the original matrix with i-th row and j-th column deleted.
I now need to create a matrix of size n by n that will have the given minor matrix mm at position {i,j}. What I used in the algorithm was:
ExpandMinor[minmat_, {i_, j_}, v1_,
v2_] /; {Length[v1] - 1, Length[v2]} == Dimensions[minmat] :=
Insert[Transpose[Insert[Transpose[minmat], v2, j]], v1, i]
Example:
In[31]:= ExpandMinor[
IdentityMatrix[4], {2, 3}, {1, 2, 3, 4, 5}, {2, 3, 4, 4}]
Out[31]= {{1, 0, 2, 0, 0}, {1, 2, 3, 4, 5}, {0, 1, 3, 0, 0}, {0, 0, 4,
1, 0}, {0, 0, 4, 0, 1}}
I am hoping this can be done more efficiently, which is what I am soliciting in the question.
Per belisarius's suggestion, I looked into implementing ExpandMinor via ArrayFlatten.
Clear[ExpandMinorAlt];
ExpandMinorAlt[m_, {i_ /; i > 1, j_}, v1_,
v2_] /; {Length[v1] - 1, Length[v2]} == Dimensions[m] :=
ArrayFlatten[{
{Part[m, ;; i - 1, ;; j - 1], Transpose@{v2[[;; i - 1]]},
Part[m, ;; i - 1, j ;;]},
{{v1[[;; j - 1]]}, {{v1[[j]]}}, {v1[[j + 1 ;;]]}},
{Part[m, i ;;, ;; j - 1], Transpose@{v2[[i ;;]]}, Part[m, i ;;, j ;;]}
}]
ExpandMinorAlt[m_, {1, j_}, v1_,
v2_] /; {Length[v1] - 1, Length[v2]} == Dimensions[m] :=
ArrayFlatten[{
{{v1[[;; j - 1]]}, {{v1[[j]]}}, {v1[[j + 1 ;;]]}},
{Part[m, All, ;; j - 1], Transpose@{v2}, Part[m, All, j ;;]}
}]
In[192]:= dim = 5;
mm = RandomInteger[{-5, 5}, {dim, dim}];
v1 = RandomInteger[{-5, 5}, dim + 1];
v2 = RandomInteger[{-5, 5}, dim];
In[196]:=
Table[ExpandMinor[mm, {i, j}, v1, v2] ==
ExpandMinorAlt[mm, {i, j}, v1, v2], {i, dim}, {j, dim}] //
Flatten // DeleteDuplicates
Out[196]= {True}
It took me a while to get here, but since I spent a good part of my postdoc generating random matrices, I could not help it, so here goes. The main inefficiency in the code comes from the necessity to move matrices around (copy them). If we could reformulate the algorithm so that we only modify a single matrix in place, we could win big. For this, we must compute the positions where the inserted vectors/rows will end up, given that we will typically insert in the middle of smaller matrices and thus shift the elements. This is possible. Here is the code:
gen = Compile[{{p, _Integer}, {n, _Integer}},
Module[{vmat = Table[0, {n}, {n}],
rs = Table[0, {n}],(* A vector of r-s*)
amatr = Table[0, {n}, {n}],
tmatr = Table[0, {n}, {n}],
i = 1,
v = Table[0, {n}],
r = n + 1,
rsc = Table[0, {n}], (* recomputed r-s *)
matstarts = Table[0, {n}], (* Horizontal positions of submatrix starts at a given step *)
remainingShifts = Table[0, {n}]
(*
** shifts that will be performed after a given row/vector insertion,
** and can affect the real positions where the elements will end up
*)
},
(*
** Compute the r-s and vectors v all at once. Pad smaller
** vectors v with zeros to fill a rectangular matrix
*)
For[i = 1, i <= n, i++,
While[True,
v = RandomInteger[{0, p - 1}, i];
For[r = 1, r <= i && v[[r]] == 0, r++];
If[r <= i,
vmat[[i]] = PadRight[v, n];
rs[[i]] = r;
Break[]]
]];
(*
** We must recompute the actual r-s, since the elements will
** move due to subsequent column insertions.
** The code below repeatedly adds shifts to the
** r-s on the left, resulting from insertions on the right.
** For example, if vector of r-s
** is {1,2,1,3}, it will become {1,2,1,3}->{2,3,1,3}->{2,4,1,3},
** and the end result shows where
** in the actual matrix the columns (and also rows for the case of
** tmatr) will be inserted
*)
rsc = rs;
For[i = 2, i <= n, i++,
remainingShifts = Take[rsc, i - 1];
For[r = 1, r <= i - 1, r++,
If[remainingShifts[[r]] == rsc[[i]],
Break[]
]
];
If[ r <= n,
rsc[[;; i - 1]] += UnitStep[rsc[[;; i - 1]] - rsc[[i]]]
]
];
(*
** Compute the starting left positions of sub-
** matrices at each step (1x1,2x2,etc)
*)
matstarts = FoldList[Min, First@rsc, Rest@rsc];
(* Initialize matrices - this replaces the recursion base *)
amatr[[n, rsc[[1]]]] = 1;
tmatr[[rsc[[1]], rsc[[1]]]] = RandomInteger[{1, p - 1}];
(* Repeatedly perform insertions - this replaces recursion *)
For[i = 2, i <= n, i++,
amatr[[n - i + 2 ;; n, rsc[[i]]]] = RandomInteger[{0, p - 1}, i - 1];
amatr[[n - i + 1, rsc[[i]]]] = 1;
tmatr[[n - i + 2 ;; n, rsc[[i]]]] = Table[0, {i - 1}];
tmatr[[rsc[[i]],
Fold[# + 1 - Unitize[# - #2] &,
matstarts[[i]] + Range[0, i - 1], Sort[Drop[rsc, i]]]]] =
vmat[[i, 1 ;; i]];
];
{amatr, tmatr}
],
{{FoldList[__], _Integer, 1}}, CompilationTarget -> "C"];
NonSingularRandomMatrix[p_?PrimeQ, n_] := Mod[Dot @@ Gen[p, n], p];
NonSingularRandomMatrixAlt[p_?PrimeQ, n_] := Mod[Dot @@ gen[p, n], p];
Here is the timing for the large matrix:
In[1114]:= gen [101, 300]; // Timing
Out[1114]= {0.078, Null}
For the histogram, I get identical plots and a roughly 10-fold efficiency boost:
In[1118]:=
Histogram[Table[NonSingularRandomMatrix[11, 5][[2, 3]], {10^4}]]; // Timing
Out[1118]= {7.75, Null}
In[1119]:=
Histogram[Table[NonSingularRandomMatrixAlt[11, 5][[2, 3]], {10^4}]]; // Timing
Out[1119]= {0.687, Null}
I expect that upon careful profiling of the above compiled code, one could further improve the performance. Also, I did not use the runtime Listable attribute in Compile, although this should be possible. It may also be that the parts of the code which perform assignment to minors are generic enough that the logic can be factored out of the main function - I have not investigated that yet.
For the first part of your question (which I hope I understand properly): can MinorMatrix be written as follows?
MinorMatrixAlt[m_?MatrixQ, {i_, j_}] := Drop[m, {i}, {j}]
I am wondering if anyone could please help me draw a triangular grid (equilateral) with edge length n in Mathematica. Thanks.
A Simple Grid:
p = Table[ Table[
Polygon[{j - 1/2 i, i Sqrt[3]/2} + # & /@ {{0, 0}, {1/2, Sqrt[3]/2}, {1, 0}}],
{j, i, 9}], {i, 0, 9}];
Graphics[{EdgeForm[Black], FaceForm[White], p}]
Edit
A clearer version, I guess:
s3 = Sqrt[3];
templateTriangleVertex = {{0, 0}, {1, s3}, {2, 0}};
p = Table[Table[
Polygon[{2 j - i, s3 i} + # & /@ templateTriangleVertex],
{j, i, 9}], {i, 0, 9}];
Graphics[{EdgeForm[Black], FaceForm[White], p}]
Something like this?
(image omitted; source: yaroslavvb.com)
This is the code I used. Perhaps too complicated for the specific task above; it's part of code I had to visualize integer lattices like this.
A = Sqrt[2/3] {Cos[#], Sin[#], Sqrt[1/2]} & /@
Table[Pi/2 + 2 Pi/3 + 2 k Pi/3, {k, 0, 2}] // Transpose;
p2r[{x_, y_, z_}] := Most[A.{x, y, z}];
n = 10;
types = 1/n Permutations /@ IntegerPartitions[n, {3}, Range[1, n]] //
Flatten[#, 1] &;
points = p2r /@ types;
Needs["ComputationalGeometry`"]
Graphics[{EdgeForm[Black], FaceForm[Transparent],
GraphicsComplex[points,
Polygon /@ DelaunayTriangulation[points // N][[All, 2]]]}]
What this does:
types contains all 3-tuples of positive integers that add up to n. Those tuples lie on a 2-dimensional subspace of R^3.
A is a linear transformation that rotates those 3-tuples into the x-y plane.
Delaunay triangulation finds all triangles connecting nearby points.
Here is a variation on belisarius' method.
p = Table[{2 j - i, Sqrt[3] i}, {i, 0, 9}, {j, i, 9}]
Graphics[Line @ Join[p, Riffle @@@ Partition[p, 2, 1]]]
I want to "modify" Mathematica's Interpolation[] function (in 1
dimension) by replacing extrapolation with constant values when the
input is out of range.
In other words, if the interpolation domain is [1,20] and f[1]==7 and
f[20]==12, I want:
f[x] = 7 for x<=1
f[x] = 12 for x>=20
f[x] = Interpolation[...]
However, this fails:
(* interpolation w cutoff *)
interpcut[r_] := Module[{s, minpair, maxpair},
(* sort array by x coord *)
s = Sort[r, #1[[1]] < #2[[1]] &];
(* find min x value and corresponding y value *)
minpair = s[[1]];
(* ditto for max x value *)
maxpair = s[[-1]];
(* return the pure function representing cutoff interpolation *)
Piecewise[{
{minpair[[2]] &, #1 < minpair[[1]] &},
{maxpair[[2]] &, #1 > maxpair[[1]] &},
{Interpolation[r], True}
}]]
test = Table[{x,Prime[x]},{x,1,10}]
InputForm[interpcut[test]]
Piecewise[{{minpair$59[[2]] & , #1 < minpair$59[[1]] & },
{maxpair$59[[2]] & , #1 > maxpair$59[[1]] & }},
InterpolatingFunction[{{1, 10}}, {3, 1, 0, {10}, {4}, 0, 0, 0, 0},
{{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}}, {{2}, {3}, {5}, {7}, {11}, {13}, {17},
{19}, {23}, {29}}, {Automatic}]]
I'm sure I'm missing something basic. What?
Function definition
interpcut[r_, x_] :=
Module[{s},(*sort array by x coord*)
s = SortBy[r, First];
Piecewise[
{{First[s][[2]], x < First[s][[1]]},
{Last [s][[2]], x > Last [s][[1]]},
{Interpolation[r][x], True}}]];
Test
test = Table[{x, Prime[x]}, {x, 1, 10}];
f[x_] := interpcut[test, x]
Plot[f[x], {x, -10, 30}]
Edit
Answering your comment about pure functions.
I did it that way just for clarity, not for cheating. To use pure functions, just "follow the recipe":
interpcut[r_] := Module[{s},
s = SortBy[r, First];
Function[Piecewise[
{{First[s][[2]], # < First[s][[1]]},
{Last [s][[2]], # > Last [s][[1]]},
{Interpolation[r][#], True}}]]
]
test = Table[{x, Prime[x]}, {x, 1, 10}];
f = interpcut[test] // InputForm
Plot[interpcut[test][x], {x, -10, 30}]
Let me add an update to this old thread. Since V9 you can use the native (but still experimental) "ExtrapolationHandler" parameter:
test = Table[{x, Prime[x]}, {x, 1, 10}];
g = Interpolation[test, "ExtrapolationHandler" ->
{If[# <= test[[1, 1]], test[[1, 2]], test[[-1, 2]]] &,
"WarningMessage" -> False}];
Plot[g[x], {x, -10, 30}]
Here's a possible alternative to belisarius's answer:
interpcut[r_] := Module[{s}, s = SortBy[r, First];
Composition[Interpolation[r], Clip[#, Map[First, Through[{First, Last}[s]]]] &]]
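A usage sketch with the same test data as above:
test = Table[{x, Prime[x]}, {x, 1, 10}];
f = interpcut[test];
Plot[f[x], {x, -10, 30}]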