Complexity of backtracking algorithm

I tried to solve this problem using backtracking, but I am not sure about the complexity of the algorithm (or whether the algorithm is correct), nor what an algorithm with better complexity would look like.
Given 2 positive integers n and m, we call a sequence of integers legal if:
the length of the sequence is n
the elements in the sequence are between 1 and m
the element in position i of the sequence, 1 < i <= n, is a divisor of the element in position i-1
Count the number of legal sequences. The expected complexity of the algorithm is O(m² + nm).
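For example, with n = 2 and m = 3 there are 5 legal sequences: (1,1), (2,1), (2,2), (3,1), (3,3).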
This is my algorithm in C:
// n length of the sequence
// m maximum valid number
// l number of remaining positions in the sequence
// p previous number in the sequence
int legal(int n, int m, int l, int p) {
    if (l == 0)
        return 1;
    int q = 0;
    for (int i = 1; i <= m; i++) {
        if (p % i == 0 || l == n)
            q += legal(n, m, l - 1, i);
    }
    return q;
}

int main() {
    int n, m;
    scanf("%d", &n);
    scanf("%d", &m);
    printf("%d\n", legal(n, m, n, 0));
}
I think the complexity of my algorithm is O(nmS(n)), where S(n) is the number of legal sequences.

You are correct that your program's running time grows with the size of the solution space of the problem. For this type of problem, that is sub-optimal for large input (say n = m = 100), because the solution space grows exponentially with m and n. Here is a solution that uses memoization to avoid re-computation:
#include <cstdio>
#define LIMIT 101
#define DIRTY -1
long long cache[LIMIT][LIMIT];
void clear_cache() {
    for (int i = 0; i < LIMIT; i++) {
        for (int j = 0; j < LIMIT; j++) {
            // mark all entries in the cache as dirty
            cache[i][j] = DIRTY;
        }
    }
}

long long legal_seqs(int curr_len, int prev_num, int seq_len, int max_num) {
    // base case
    if (curr_len == seq_len) return 1;
    // if we haven't seen this sub-problem, compute it!
    // this is called memoization
    if (cache[curr_len][prev_num] == DIRTY) {
        long long ways = 0;
        // try every divisor of prev_num as the next element
        for (int next_num = 1; next_num <= max_num; next_num++) {
            if (prev_num % next_num == 0) {
                ways += legal_seqs(curr_len + 1, next_num, seq_len, max_num);
            }
        }
        cache[curr_len][prev_num] = ways;
    }
    return cache[curr_len][prev_num];
}

int main() {
    int n, m;
    scanf("%d%d", &n, &m);
    clear_cache();
    printf("%lld\n", legal_seqs(0, 0, n, m));
}
The code above has O(nm) distinct (curr_len, prev_num) states and fills each with a single O(m) scan, so it runs in O(nm²) time, which is polynomial rather than exponential; precomputing each value's divisors brings it closer to the stated O(m² + nm) bound.
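Here is a bottom-up sketch of my own (not part of the original answer) that precomputes each value's divisors once with a sieve; dp[v] then counts the legal sequences of the current length ending with value v. The sieve costs O(m log m), and each of the n layers touches every (value, divisor) pair once, so the total is roughly O(nm log m), close to the stated O(m² + nm) target. It reads n and m from standard input like the original programs.

// Bottom-up counting sketch (assumed I/O: n then m on stdin).
#include <cstdio>
#include <vector>
using std::vector;

int main() {
    int n, m;
    if (scanf("%d %d", &n, &m) != 2) return 1;

    // divisors[v] = all divisors of v, built sieve-style in O(m log m)
    vector<vector<int>> divisors(m + 1);
    for (int d = 1; d <= m; d++)
        for (int v = d; v <= m; v += d)
            divisors[v].push_back(d);

    // dp[v] = number of legal sequences of the current length ending with v
    vector<long long> dp(m + 1, 1), next(m + 1, 0);   // length 1: one sequence per value
    for (int len = 2; len <= n; len++) {
        for (int v = 1; v <= m; v++) next[v] = 0;
        for (int v = 1; v <= m; v++)
            for (int d : divisors[v])        // append any divisor d of v
                next[d] += dp[v];            // (counts can overflow long long for huge n, m)
        dp.swap(next);
    }

    long long total = 0;
    for (int v = 1; v <= m; v++) total += dp[v];
    printf("%lld\n", total);
}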


Minimum Coin Change Top Down DP with 1D array

This is the coin change problem from LeetCode, where you have infinitely many coins of the given denominations and you have to find the minimum number of coins required to make the given sum.
I tried solving this problem using a 1D cache array with a top-down approach. Basic test cases pass, but it fails for some larger values of the sum and denominations. My assumption is that I am doing something wrong while traversing the array, perhaps missing some subproblem calculations, but I am not able to find the issue.
Problem Statement:
You are given an integer array coins representing coins of different denominations and an integer amount representing a total amount of money.
Return the fewest number of coins that you need to make up that amount. If that amount of money cannot be made up by any combination of the coins, return -1.
You may assume that you have an infinite number of each kind of coin.
Input: coins = [1,2,5], amount = 11
Output: 3
Explanation: 11 = 5 + 5 + 1
My Solution:
/* Test case, it's failing for:
Input: coins: [186,419,83,408]
sum = 6249
Output: 26
Expected: 20
*/
------------------------------------------------------------------------
int fncUtil(int dp[], int a[], int sum, int n, int curCoins) {
    if(sum == 0) {
        return curCoins;
    }
    if(n < 0 || sum < 1) {
        return Integer.MAX_VALUE;
    }
    if(dp[sum] != Integer.MAX_VALUE) {
        return dp[sum];
    }
    dp[sum] = Math.min(fncUtil(dp, a, sum - a[n], n, curCoins+1),
                       fncUtil(dp, a, sum, n-1, curCoins));
    return dp[sum];
}

public int coinChange(int[] a, int sum) {
    Arrays.sort(a);
    int n = a.length;
    // minCoins = Integer.MAX_VALUE;
    int dp[] = new int[sum+1];
    for(int i = 0; i <= sum; i++) {
        dp[i] = Integer.MAX_VALUE;
    }
    dp[0] = 0;
    int minCoins = fncUtil(dp, a, sum, n-1, 0);
    if(minCoins == Integer.MAX_VALUE) return -1;
    return minCoins;
}
It seems you don't update the dp array when it already holds a value:
if(dp[sum] != Integer.MAX_VALUE) {
    return dp[sum];
}
Perhaps you need to choose the best of the three variants:
dp[sum] = Math.min(dp[sum],
                   Math.min(fncUtil(dp, a, sum - a[n], n, curCoins+1),
                            fncUtil(dp, a, sum, n-1, curCoins)));
But we can solve this problem without recursion, using a bottom-up order (not checked):
public int coinChange(int[] a, int sum) {
    int n = a.length;
    int dp[] = new int[sum+1];
    for(int i = 0; i <= sum; i++) {
        dp[i] = Integer.MAX_VALUE - 1;
    }
    dp[0] = 0;
    for(int i = 0; i < n; i++) {
        for (int k = a[i]; k <= sum; k++) {
            dp[k] = Math.min(dp[k], dp[k-a[i]] + 1);
        }
    }
    // unreachable sums still hold the sentinel value; report them as -1
    return dp[sum] == Integer.MAX_VALUE - 1 ? -1 : dp[sum];
}
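For comparison, here is a top-down sketch of my own (in C++ rather than Java, not part of the original answer). Because any coin may be reused at every step, the answer depends only on the remaining amount, so a memo indexed by the remaining sum alone is enough; that is exactly what makes a 1D cache valid here.

// Top-down coin change keyed only on the remaining sum (sketch).
#include <cstdio>
#include <vector>
#include <algorithm>
#include <climits>

const int INF = INT_MAX / 2;   // large enough that INF + 1 cannot overflow

int solve(int sum, const std::vector<int>& coins, std::vector<int>& memo) {
    if (sum == 0) return 0;
    if (memo[sum] != -1) return memo[sum];       // -1 means "not computed yet"
    int best = INF;
    for (int c : coins)
        if (c <= sum)
            best = std::min(best, 1 + solve(sum - c, coins, memo));
    return memo[sum] = best;
}

int main() {
    std::vector<int> coins = {186, 419, 83, 408};   // the failing test case from the question
    int amount = 6249;
    std::vector<int> memo(amount + 1, -1);
    int best = solve(amount, coins, memo);
    printf("%d\n", best >= INF ? -1 : best);        // prints 20, the expected output above
}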

Is every recursive algorithm a divide and conquer algorithm?

I have a problem for homework and I need to solve this problem with a divide and conquer algorithm.
I solved this problem by using recursion. Did I automatically use divide and conquer by using recursion?
For example, is the approach below a divide and conquer algorithm? I ask because I call the fun function inside fun (a recursive call).
Code:
#include <stdio.h>
/* int a[] = {-6,60,-10,20}; */
int a[] = {-2, -3, 4, -1, -2, 1, 5, -3};
int len = sizeof(a)/sizeof(*a);
int maxherearray[10];
int fun(int n);
int max(int a, int b);
int find_max(int a[], int len);
void print_array(int a[], int start_idx, int end_idx);
int start_idx = 0; // Start of contiguous subarray giving max sum
int end_idx = 0; // End of contiguous subarray giving max sum
#define NEG_INF (-100000)
int max_sum = NEG_INF; // The max cont sum seen so far.
int main(void)
{
start_idx = 0;
end_idx = len - 1;
maxherearray[0] = a[0];
printf("Array a[]: ");
print_array(a, 0, len-1);
printf("\n");
// Compute the necessary information to get max contiguous subarray
fun(len - 1);
printf("Max subarray value == %d\n", find_max(maxherearray, len));
printf("\n");
printf("Contiguous sums: ");
print_array(maxherearray, 0, len - 1);
printf("\n");
printf("Contiguous subarray giving max sum: ");
print_array(a, start_idx, end_idx);
printf("\n\n");
return 0;
}
int fun(int n)
{
if(n==0)
return a[0];
int max_till_j = fun(n - 1);
// Start of new contiguous sum
if (a[n] > a[n] + max_till_j)
{
maxherearray[n] = a[n];
if (maxherearray[n] > max_sum)
{
start_idx = end_idx = n;
max_sum = maxherearray[n];
}
}
// Add to current contiguous sum
else
{
maxherearray[n] = a[n] + max_till_j;
if (maxherearray[n] > max_sum)
{
end_idx = n;
max_sum = maxherearray[n];
}
}
return maxherearray[n];
}
int max(int a, int b)
{
return (a > b)? a : b;
}
// Print subarray a[i] to a[j], inclusive of end points.
void print_array(int a[], int i, int j)
{
for (; i <= j; ++i) {
printf("%d ", a[i]);
}
}
int find_max(int a[], int len)
{
int i;
int max_val = NEG_INF;
for (i = 0; i < len; ++i)
{
if (a[i] > max_val)
{
max_val = a[i];
}
}
return max_val;
}
Not every recursive function is a divide-and-conquer algorithm. There are other approaches, like decrease-and-conquer (decrease by a constant factor, decrease by one, variable-size decrease).
Is this below approach a divide and conquer algorithm?
Your function is exactly the decrease-by-a-constant approach, with the constant equal to 1 (decrease-by-one). You can glance at it here.
Pseudocode for the divide-and-conquer algorithm for
finding a maximum-subarray
MaxSubarray(A,low,high)
//
if high == low
return (low, high, A[low]) // base case: only one element
else
// divide and conquer
mid = floor( (low + high)/2 )
(leftlow,lefthigh,leftsum) = MaxSubarray(A,low,mid)
(rightlow,righthigh,rightsum) = MaxSubarray(A,mid+1,high)
(xlow,xhigh,xsum) = MaxXingSubarray(A,low,mid,high)
// combine
if leftsum >= rightsum and leftsum >= xsum
return (leftlow,lefthigh,leftsum)
else if rightsum >= leftsum and rightsum >= xsum
return (rightlow,righthigh,rightsum)
else
return (xlow,xhigh,xsum)
end if
end if
--------------------------------------------------------------
MaxXingSubarray(A,low,mid,high)
// Find a max-subarray of A[i..mid]
leftsum = -infty
sum = 0
for i = mid downto low
sum = sum + A[i]
if sum > leftsum
leftsum = sum
maxleft = i
end if
end for
// Find a max-subarray of A[mid+1..j]
rightsum = -infty
sum = 0
for j = mid+1 to high
sum = sum + A[j]
if sum > rightsum
rightsum = sum
maxright = j
end if
end for
// Return the indices i and j and the sum of the two subarrays
return (maxleft,maxright,leftsum + rightsum)
-----------------------------------------------------------
=== Remarks:
(1) Initial call: MaxSubarray(A,1,n)
(2) Divide by computing mid.
Conquer by the two recursive calls to MaxSubarray.
Combine by calling MaxXingSubarray and then determining
which of the three results gives the maximum sum.
(3) Base case is when the subarray has only 1 element.
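If it helps to see that pseudocode as something you can compile and run, here is a minimal C++ transcription of my own (a sketch: the names follow the pseudocode, but the indices are 0-based and it is exercised on the array from the question).

// Divide-and-conquer maximum subarray, transcribed from the pseudocode above.
#include <cstdio>
#include <climits>

struct Result { int low, high, sum; };

// Max subarray of A[low..high] that crosses the midpoint mid.
Result maxXingSubarray(const int A[], int low, int mid, int high) {
    int leftsum = INT_MIN, sum = 0, maxleft = mid;
    for (int i = mid; i >= low; i--) {
        sum += A[i];
        if (sum > leftsum) { leftsum = sum; maxleft = i; }
    }
    int rightsum = INT_MIN, maxright = mid + 1;
    sum = 0;
    for (int j = mid + 1; j <= high; j++) {
        sum += A[j];
        if (sum > rightsum) { rightsum = sum; maxright = j; }
    }
    return { maxleft, maxright, leftsum + rightsum };
}

Result maxSubarray(const int A[], int low, int high) {
    if (high == low) return { low, high, A[low] };      // base case: one element
    int mid = (low + high) / 2;                         // divide
    Result left  = maxSubarray(A, low, mid);            // conquer left half
    Result right = maxSubarray(A, mid + 1, high);       // conquer right half
    Result cross = maxXingSubarray(A, low, mid, high);  // combine across the middle
    if (left.sum >= right.sum && left.sum >= cross.sum)  return left;
    if (right.sum >= left.sum && right.sum >= cross.sum) return right;
    return cross;
}

int main() {
    int a[] = {-2, -3, 4, -1, -2, 1, 5, -3};
    Result r = maxSubarray(a, 0, 7);
    printf("max sum %d over a[%d..%d]\n", r.sum, r.low, r.high);   // max sum 7 over a[2..6]
}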
Not necessarily. If you explore the functional programming paradigm you will learn that the simple for loop can be replaced with recursion
for i in range(x):
    body(i)
changes to
def do_loop(x, _start=0):
    if _start < x:
        body(_start)
        do_loop(x, _start=_start+1)
and it's quite obvious that not every loop rewritten as a recursion is a divide and conquer algorithm.

Find a subsequence of length k whose sum is equal to given sum

Given an array A and a sum, I want to find out if there exists a subsequence of length K such that the sum of all elements in the subsequence equals the given sum.
Code:
for i in (1, N):
    for len in (i-1, 0):
        for sum in (0, sum of all elements):
            Possible[len+1][sum] |= Possible[len][sum - A[i]]
The time complexity is O(N²·Sum). Is there any way to improve the time complexity to O(N·Sum)?
My function shifts a window of k adjacent array items across the array A and keeps the sum up to date until it matches or the search fails.
#include <assert.h>
#include <stddef.h>

int getSubSequenceStart(int A[], size_t len, int sum, size_t k)
{
    int sumK = 0;

    assert(len > 0);
    assert(k <= len);

    // compute sum for first k items
    for (size_t i = 0; i < k; i++)
    {
        sumK += A[i];
    }
    // shift k-window up to end of A
    for (size_t j = k; j < len; j++)
    {
        if (sumK == sum)
        {
            return j - k;
        }
        sumK += A[j] - A[j - k];
    }
    // the loop above never tests the final window, so check it here
    if (sumK == sum)
    {
        return len - k;
    }
    return -1;
}
Complexity is linear with the length of array A.
Update for the non-contiguous (general subsequence) case:
To find a possibly non-contiguous subsequence, you could transform your problem into a subset sum problem by subtracting sum/k from every element of A and looking for a subset with sum zero. The subset sum problem is NP-hard (the usual solutions are exponential or pseudo-polynomial), so you cannot hope for a linear algorithm unless your array A has special properties.
Edit:
This could actually be solved without the queue in linear time (negative numbers allowed).
C# code:
bool SubsequenceExists(int[] a, int k, int sum)
{
    int currentSum = 0;
    if (a.Length < k) return false;
    for (int i = 0; i < a.Length; i++)
    {
        if (i < k)
        {
            currentSum += a[i];
            continue;
        }
        if (currentSum == sum) return true;
        currentSum += a[i] - a[i-k];
    }
    // check the final window as well
    return currentSum == sum;
}
Original answer:
Assuming you can use a queue of length K something like that should do the job in linear time.
C# code:
bool SubsequenceExists(int[] a, int k, int sum)
{
    int currentSum = 0;
    if (a.Length < k) return false;
    var queue = new Queue<int>();
    for (int i = 0; i < a.Length; i++)
    {
        if (i < k)
        {
            queue.Enqueue(a[i]);
            currentSum += a[i];
            continue;
        }
        if (currentSum == sum) return true;
        currentSum -= queue.Dequeue();
        queue.Enqueue(a[i]);
        currentSum += a[i];
    }
    // check the final window as well
    return currentSum == sum;
}
The logic behind that is pretty much straightforward:
We populate a queue with first K elements while also storing its sum somewhere.
If the resulting sum is not equal to sum then we dequeue an element from the queue and add the next one from A (while updating the sum).
We repeat step 2 until we either reach the end of sequence or find the matching subsequence.
Ta-daa!
Let is_subset_sum(int set[], int n, int sum) be the function to find whether there is a subset of set[] with sum equal to sum. n is the number of elements in set[].
The is_subset_sum problem can be divided into two subproblems
Include the last element, recur for n = n-1, sum = sum - set[n-1]
Exclude the last element, recur for n = n-1.
If any of the above subproblems return true, then return true.
Following is the recursive formula for is_subset_sum() problem.
is_subset_sum(set, n, sum) = is_subset_sum(set, n-1, sum) || is_subset_sum(set, n-1, sum-set[n-1])
Base Cases:
is_subset_sum(set, n, sum) = false, if sum > 0 and n == 0
is_subset_sum(set, n, sum) = true, if sum == 0
We can solve the problem in pseudo-polynomial time using dynamic programming. We create a boolean 2D table subset[][] and fill it in a bottom-up manner. The value of subset[i][j] will be true if there is a subset of set[0..j-1] with sum equal to i, and false otherwise. Finally, we return subset[sum][n].
The time complexity of the solution is O(sum*n).
Implementation in C
// A Dynamic Programming solution for the subset sum problem
#include <stdio.h>
#include <stdbool.h>

// Returns true if there is a subset of set[] with sum equal to the given sum
bool is_subset_sum(int set[], int n, int sum) {
    // The value of subset[i][j] will be true if there is a
    // subset of set[0..j-1] with sum equal to i
    bool subset[sum+1][n+1];

    // If sum is 0, then the answer is true
    for (int i = 0; i <= n; i++)
        subset[0][i] = true;

    // If sum is not 0 and the set is empty, then the answer is false
    for (int i = 1; i <= sum; i++)
        subset[i][0] = false;

    // Fill the subset table in bottom-up manner
    for (int i = 1; i <= sum; i++) {
        for (int j = 1; j <= n; j++) {
            subset[i][j] = subset[i][j-1];
            if (i >= set[j-1])
                subset[i][j] = subset[i][j] || subset[i - set[j-1]][j-1];
        }
    }

    /* // uncomment this code to print the table
    for (int i = 0; i <= sum; i++) {
        for (int j = 0; j <= n; j++)
            printf("%4d", subset[i][j]);
        printf("\n");
    } */

    return subset[sum][n];
}

// Driver program to test the function above
int main() {
    int set[] = {3, 34, 4, 12, 5, 2};
    int sum = 9;
    int n = sizeof(set)/sizeof(set[0]);
    if (is_subset_sum(set, n, sum) == true)
        printf("Found a subset with given sum");
    else
        printf("No subset with given sum");
    return 0;
}

Counting tilings of a rectangle

I am trying to solve this problem but I can't find a solution:
A board consisting of squares arranged into N rows and M columns is given. A tiling of this board is a pattern of tiles that covers it. A tiling is interesting if:
only tiles of size 1x1 and/or 2x2 are used;
each tile of size 1x1 covers exactly one whole square;
each tile of size 2x2 covers exactly four whole squares;
each square of the board is covered by exactly one tile.
For example, the following images show a few interesting tilings of a board of size 4 rows and 3 columns:
http://dabi.altervista.org/images/task.img.4x3_tilings_example.gif
Two interesting tilings of a board are different if there exists at least one square on the board that is covered with a tile of size 1x1 in one tiling and with a tile of size 2x2 in the other. For example, all tilings shown in the images above are different.
Write a function
int count_tilings(int N, int M);
that, given two integers N and M, returns the remainder modulo 10,000,007 of the number of different interesting tilings of a board of size N rows and M columns.
Assume that:
N is an integer within the range [1..1,000,000];
M is an integer within the range [1..7].
For example, given N = 4 and M = 3, the function should return 11, because there are 11 different interesting tilings of a board of size 4 rows and 3 columns:
http://dabi.altervista.org/images/task.img.4x3_tilings_all.gif
for (4,3) the result is 11, for (6,5) the result is 1213.
I tried the following but it doesn't work:
static public int count_tilings ( int N,int M ) {
int result=1;
if ((N==1)||(M==1)) return 1;
result=result+(N-1)*(M-1);
int max_tiling= (int) ((int)(Math.ceil(N/2))*(Math.ceil(M/2)));
System.out.println(max_tiling);
for (int i=2; i<=(max_tiling);i++){
if (N>=2*i){
int n=i+(N-i);
int k=i;
//System.out.println("M-1->"+(M-1) +"i->"+i);
System.out.println("(M-1)^i)->"+(Math.pow((M-1),i)));
System.out.println( "n="+n+ " k="+k);
System.out.println(combinations(n, k));
if (N-i*2>0){
result+= Math.pow((M-1),i)*combinations(n, k);
}else{
result+= Math.pow((M-1),i);
}
}
if (M>=2*i){
int n=i+(M-i);
int k=i;
System.out.println("(N-1)^i)->"+(Math.pow((N-1),i)));
System.out.println( "n="+n+ " k="+k);
System.out.println(combinations(n, k));
if (M-i*2>0){
result+= Math.pow((N-1),i)*combinations(n, k);
}else{
result+= Math.pow((N-1),i);
}
}
}
return result;
}
static long combinations(int n, int k) {
/*binomial coefficient*/
long coeff = 1;
for (int i = n - k + 1; i <= n; i++) {
coeff *= i;
}
for (int i = 1; i <= k; i++) {
coeff /= i;
}
return coeff;
}
Since this is homework I won't give a full solution, but I'll give you some hints.
First here's a recursive solution:
using System;

class Program
{
    // Important note:
    // The value of masks given here is hard-coded for m == 5.
    // In a complete solution, you need to calculate the masks for the
    // actual value of m given. See explanation in answer for more details.
    int[] masks = { 0, 3, 6, 12, 15, 24, 27, 30 };

    int CountTilings(int n, int m, int s = 0)
    {
        if (n == 1) { return 1; }
        int result = 0;
        foreach (int mask in masks)
        {
            if ((mask & s) == 0)
            {
                result += CountTilings(n - 1, m, mask);
            }
        }
        return result;
    }

    public static void Main()
    {
        Program p = new Program();
        int result = p.CountTilings(6, 5);
        Console.WriteLine(result);
    }
}
See it working online: ideone
Note that I've added an extra parameter s. This stores the contents of the first column. If the first column is empty, s = 0. If the first column contains some filled squares the corresponding bits in s are set. Initially s = 0, but when a 2 x 2 tile is placed, this fills up some squares in the next column, and that will mean that s will be non-zero in the recursive call.
The masks variable is hard-coded but in a complete solution it needs to be calculated based on the actual value of m. The values stored in masks make more sense if you look at their binary representations:
00000
00011
00110
01100
01111
11000
11011
11110
In other words, it's all the ways of setting non-overlapping pairs of adjacent bits in a binary number with m bits. You can write some code to generate all these possibilities. Or, since there are only 7 possible values of m, you could also just hard-code all seven possibilities for masks.
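For what it's worth, here is a small sketch of my own (not from the original answer) of that mask generation: it enumerates every way of placing non-overlapping pairs of adjacent bits in an m-bit number, each pair being the left column of one 2x2 tile. For m = 5 it produces exactly the eight values hard-coded above, though possibly in a different order.

// Generate all 2x2-tile masks for a column of m squares (sketch).
#include <cstdio>
#include <vector>

void buildMasks(int m, int pos, int mask, std::vector<int>& out) {
    if (pos >= m) { out.push_back(mask); return; }
    buildMasks(m, pos + 1, mask, out);                 // leave row pos uncovered (it gets a 1x1 tile)
    if (pos + 1 < m)                                   // start a 2x2 tile on rows pos and pos + 1
        buildMasks(m, pos + 2, mask | (3 << pos), out);
}

int main() {
    std::vector<int> masks;
    buildMasks(5, 0, 0, masks);
    for (int mask : masks) printf("%d ", mask);        // the 8 masks for m == 5
    printf("\n");
}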
There are however two serious problems with the recursive solution.
It will overflow the stack for large values of N.
It requires exponential time to calculate. It is incredibly slow even for small values of N.
Both these problems can be solved by rewriting the algorithm to be iterative. Keep m constant and initialize the result for n = 1 for all possible values of s to be 1. This is because if you only have one column you must use only 1x1 tiles, and there is only one way to do this.
Now you can calculate n = 2 for all possible values of s by using the results from n = 1. This can be repeated until you reach n = N. This algorithm completes in linear time with respect to N, and requires constant space.
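As a rough sketch of that iterative idea (my own, with the masks still hard-coded for m = 5 as in the recursive version): dp[s] holds the count for the current number of columns and incoming state s, and two small arrays are reused from one column count to the next, so the space stays constant while the time is linear in N.

// Iterative (bottom-up) version of the recursion above, hard-coded for m == 5.
#include <cstdio>
#include <vector>

int main() {
    const int MOD = 10000007;
    const std::vector<int> masks = {0, 3, 6, 12, 15, 24, 27, 30};  // all 2x2 placements for m == 5
    const int S = 1 << 5;                                          // possible column states

    std::vector<long long> dp(S, 1), next(S, 0);   // n == 1: exactly one tiling for any incoming state
    const int N = 6;                               // the example from the question: N = 6, M = 5
    for (int n = 2; n <= N; n++) {
        for (int s = 0; s < S; s++) {
            long long ways = 0;
            for (int mask : masks)                 // 2x2 tiles that do not collide with s
                if ((mask & s) == 0)
                    ways = (ways + dp[mask]) % MOD;
            next[s] = ways;
        }
        dp.swap(next);
    }
    printf("%lld\n", dp[0]);                       // prints 1213, matching the question's example
}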
Here is a recursive solution:
// time used : 27 min
#include <set>
#include <vector>
#include <iostream>
using namespace std;
void placement(int n, set< vector <int> > & p){
for (int i = 0; i < n -1 ; i ++){
for (set<vector<int> > :: iterator j = p.begin(); j != p.end(); j ++){
vector <int> temp = *j;
if (temp[i] == 1 || temp[i+1] == 1) continue;
temp[i] = 1; temp[i+1] = 1;
p.insert(temp);
}
}
}
vector<vector<int> > placement( int n){
if (n > 7) throw "error";
set <vector <int> > p;
vector <int> temp (n,0);
p.insert (temp);
for (int i = 0; i < 3; i ++) placement(n, p);
vector <vector <int> > s;
s.assign (p.begin(), p.end());
return s;
}
bool tryput(vector <vector <int> > &board, int current, vector<int> & comb){
for (int i = 0; i < comb.size(); i ++){
if ((board[current][i] == 1 || board[current+1][i]) && comb[i] == 1) return false;
}
return true;
}
void put(vector <vector <int> > &board, int current, vector<int> & comb){
for (int i = 0; i < comb.size(); i ++){
if (comb[i] == 1){
board[current][i] = 1;
board[current+1][i] = 1;
}
}
return;
}
void undo(vector <vector <int> > &board, int current, vector<int> & comb){
for (int i = 0; i < comb.size(); i ++){
if (comb[i] == 1){
board[current][i] = 0;
board[current+1][i] = 0;
}
}
return;
}
int place (vector <vector <int> > &board, int current, vector < vector <int> > & all_comb){
int m = board.size();
if (current >= m) throw "error";
if (current == m - 1) return 1;
int count = 0;
for (int i = 0; i < all_comb.size(); i ++){
if (tryput(board, current, all_comb[i])){
put(board, current, all_comb[i]);
count += place(board, current+1, all_comb) % 10000007;
undo(board, current, all_comb[i]);
}
}
return count;
}
int place (int m, int n){
if (m == 0) return 0;
if (m == 1) return 1;
vector < vector <int> > all_comb = placement(n);
vector <vector <int> > board(m, vector<int>(n, 0));
return place (board, 0, all_comb);
}
int main(){
cout << place(3, 4) << endl;
return 0;
}
The time complexity is O(n^3 * exp(m)).
To reduce the space usage, try a bit vector.
To reduce the time complexity to O(m * n^3), try dynamic programming.
To reduce the time complexity to O(log(m) * n^3), try divide and conquer plus dynamic programming.
Good luck!

A data structure problem

Given a sequence of integers, there are a number of queries.
Each query has a range [l, r], and you are to find the median of the given range [l, r].
The number of queries can be as large as 100,000.
The length of the sequence can be as large as 100,000.
I wonder if there is any data structure that can support such queries.
My solution:
I consulted my partner today and he told me to use a partition tree.
We can build a partition tree in O(n log n) time and answer each query in O(log n) time.
The partition tree essentially records the process of merge sort, but each node in the tree also saves the number of integers that go to its left subtree. We can use this information to answer the query.
here is my code:
This program finds the x in a given interval [l, r] that minimizes the equation shown in the linked image:
http://acm.tju.edu.cn/toj/3556_01.jpg
Explanation:
seq saves the sequence
pos saves the position after sort
ind saves the index
cntL saves the number of integers that go to the left tree in a given range
#include <cstdio>
#include <cstring>
#include <algorithm>
using namespace std;
#define N 100008
typedef long long LL;
int n, m, seq[N], ind[N], pos[N], next[N];
int cntL[20][N];
LL sum[20][N], sumL, subSum[N];
void build(int l, int r, int head, int dep)
{
if (l == r)
{
cntL[dep][l] = cntL[dep][l-1];
sum[dep][l] = sum[dep][l-1];
return ;
}
int mid = (l+r)>>1;
int hl = 0, hr = 0, tl = 0, tr = 0;
for (int i = head, j = l; i != -1; i = next[i], j++)
{
cntL[dep][j] = cntL[dep][j-1];
sum[dep][j] = sum[dep][j-1];
if (pos[i] <= mid)
{
next[tl] = i;
tl = i;
if (hl == 0) hl = i;
cntL[dep][j]++;
sum[dep][j] += seq[i];
}
else
{
next[tr] = i;
tr = i;
if (hr == 0) hr = i;
}
}
next[tl] = -1;
next[tr] = -1;
build(l, mid, hl, dep+1);
build(mid+1, r, hr, dep+1);
}
int query(int left, int right, int ql, int qr, int kth, int dep)
{
if (left == right)
{
return ind[left];
}
int mid = (left+right)>>1;
if (cntL[dep][qr] - cntL[dep][ql-1] >= kth)
{
return query(left, mid, left+cntL[dep][ql-1]-cntL[dep][left-1], left+cntL[dep][qr]-cntL[dep][left-1]-1, kth, dep+1);
}
else
{
sumL += sum[dep][qr]-sum[dep][ql-1];
return query(mid+1, right, mid+1+ql-left-(cntL[dep][ql-1]-cntL[dep][left-1]), mid+qr+1-left-(cntL[dep][qr]-cntL[dep][left-1]), \
kth-(cntL[dep][qr]-cntL[dep][ql-1]), dep+1);
}
}
inline int cmp(int x, int y)
{
return seq[x] < seq[y];
}
int main()
{
int ca, t, i, j, middle, ql, qr, id, tot;
LL ans;
scanf("%d", &ca);
for (t = 1; t <= ca; t++)
{
scanf("%d", &n);
subSum[0] = 0;
for (i = 1; i <= n; i++)
{
scanf("%d", seq+i);
ind[i] = i;
subSum[i] = subSum[i-1]+seq[i];
}
sort(ind+1, ind+1+n, cmp);
for (i = 1; i <= n; i++)
{
pos[ind[i]] = i;
next[i] = i+1;
}
next[n] = -1;
build(1, n, 1, 0);
printf("Case #%d:\n", t);
scanf("%d", &m);
while (m--)
{
scanf("%d%d", &ql, &qr);
ql++, qr++;
middle = (qr-ql+2)/2;
sumL= 0;
id = query(1, n, ql, qr, middle, 0);
ans = subSum[qr]-subSum[ql-1]-sumL;
tot = qr-ql+1;
ans = ans-(tot-middle+1)*1ll*seq[id]+(middle-1)*1ll*seq[id]-sumL;
printf("%lld\n", ans);
}
puts("");
}
}
This is called the Range Median Query problem. The following paper might be relevant: Towards Optimal Range Medians. (Free link, thanks to belisarius).
From the abstract of the paper:
We consider the following problem: Given an unsorted array of n elements, and a sequence of intervals in the array, compute the median in each of the subarrays defined by the intervals. We describe a simple algorithm which needs O(n log k + k log n) time to answer k such median queries. This improves previous algorithms by a logarithmic factor and matches a comparison lower bound for k = O(n). The space complexity of our simple algorithm is O(n log n) in the pointer machine model, and O(n) in the RAM model. In the latter model, a more involved O(n) space data structure can be constructed in O(n log n) time where the time per query is reduced to O(log n / log log n). We also give efficient dynamic variants of both data structures, achieving O(log² n) query time using O(n log n) space in the comparison model and O((log n / log log n)²) query time using O(n log n / log log n) space in the RAM model, and show that in the cell-probe model, any data structure which supports updates in O(log^O(1) n) time must have Ω(log n / log log n) query time. Our approach naturally generalizes to higher-dimensional range median problems, where element positions and query ranges are multidimensional—it reduces a range median query to a logarithmic number of range counting queries.
Of course, you could preprocess the whole array in O(n³) time (or perhaps even O(n² log n) time) and O(n²) space to be able to return the median of any range in O(1) time.
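For concreteness, here is a tiny sketch of that brute-force precomputation (my own illustration, assuming 0-based indices and the lower median): build an n-by-n table holding the median of every interval, then each query is a single lookup.

// O(n^3)-time, O(n^2)-space precomputation of all range medians (sketch).
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> a = {5, 2, 7, 1, 9, 3};
    const int n = (int)a.size();
    std::vector<std::vector<int>> median(n, std::vector<int>(n));
    for (int l = 0; l < n; l++) {
        for (int r = l; r < n; r++) {
            std::vector<int> sub(a.begin() + l, a.begin() + r + 1);  // copy a[l..r]
            std::nth_element(sub.begin(), sub.begin() + (r - l) / 2, sub.end());
            median[l][r] = sub[(r - l) / 2];
        }
    }
    printf("%d\n", median[1][4]);   // lower median of {2, 7, 1, 9} -> 2
}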
Additional constraints might help simplify the solution. For instance, do we know that r - l will be less than a known constant? Etc.
