Implement a random-number generator using only getpid() and gettimeofday()? - gcc

I am using the gcc compiler to implement a random-number generator using only getpid() and gettimeofday(). Here is my code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>     /* getpid() */
#include <sys/time.h>   /* gettimeofday() */

int main(int argc, char **argv)
{
    struct timeval tv;
    int count;
    int i;
    int INPUT_MAX = 10;
    int NO_OF_SAMPLES = 10;

    gettimeofday(&tv, NULL);

    printf("Enter Max: \n");
    scanf("%d", &INPUT_MAX);
    printf("Enter No. of samples needed: \n");
    scanf("%d", &NO_OF_SAMPLES);

    /*printf("%ld\n",tv.tv_usec);
    printf("PID :%d\n", getpid());*/

    for (count = 0; count < NO_OF_SAMPLES; count++) {
        printf("%ld\n", (getpid() * tv.tv_usec) % INPUT_MAX + 1);
        for (i = 0; i < 1000000; ++i)
        {
            /* empty delay loop */
        }
    }
    return 0;
}
I added the inner for loop for delay purposes, but the result I get is always the same number, like this:
./a.out
Enter Max:
10
Enter No. of samples needed:
10
1
1
1
1
1
1
1
1
1
1
Please correct me: what am I doing wrong?

getpid() is constant during the program's execution, and you call gettimeofday() only once, before the loop, so you print the same constant value every time.
But even if you use gettimeofday() inside the loop, this likely won't help much:
gcc will likely optimize away your empty delay loop.
Even if it's not optimized away, the delays will be very similar and your values won't be very random.
I'd suggest you look up "linear congruential generator" for a simple way to generate more random numbers.
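For illustration, here is a minimal sketch of such a generator, seeded once from getpid() and gettimeofday(). The multiplier and increment are the well-known sample constants from the C standard's rand() example; treat the whole thing as illustrative, not high-quality randomness:
#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>

static unsigned long lcg_state;

/* One LCG step: state = state * a + c (mod 2^31). */
static unsigned long lcg_next(void)
{
    lcg_state = (lcg_state * 1103515245UL + 12345UL) & 0x7fffffffUL;
    return lcg_state;
}

int main(void)
{
    struct timeval tv;
    int i;

    /* Seed once, mixing the pid with the current microseconds */
    gettimeofday(&tv, NULL);
    lcg_state = (unsigned long)getpid() ^ (unsigned long)tv.tv_usec;

    for (i = 0; i < 10; i++)
        printf("%lu\n", lcg_next() % 10 + 1);   /* values in 1..10 */

    return 0;
}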

Put gettimeofday() inside the loop. Also, the reason you see 1 specifically is that getpid() * tv.tv_usec happens to be divisible by INPUT_MAX, so (getpid() * tv.tv_usec) % INPUT_MAX + 1 is 1 every time. Instead of multiplying, you could add getpid() to tv.tv_usec (not that it adds much randomness).
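For example, the loop from the question could be reworked like this (same variables and declarations as in the question; still a crude generator, just one whose output actually changes):
for (count = 0; count < NO_OF_SAMPLES; count++) {
    gettimeofday(&tv, NULL);                /* re-read the clock every iteration */
    printf("%ld\n", (getpid() + tv.tv_usec) % INPUT_MAX + 1);
    for (i = 0; i < 1000000; ++i)
        ;                                   /* crude delay; gcc may optimize this away */
}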

Related

How to find the minimum number of ascending subsequences

I have come across a problem: http://poj.org/problem?id=1065
The problem is to find the minimum number of ascending subsequences.
I see that some people instead find the length of the longest descending subsequence.
I don't know why the two numbers are equal.
#include <iostream>
#include <algorithm>
#include <functional>
#include <memory.h>
/* run this program using the console pauser or add your own getch,
   system("pause") or input loop */
using namespace std;

pair<int,int> stick[5000];
int dp[5000];

int main(int argc, char** argv) {
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        for (int i = 0; i < n; i++) {
            cin >> stick[i].first >> stick[i].second;
        }
        sort(stick, stick + n);
        memset(dp, -1, sizeof(int) * 5000);
        for (int i = 0; i < n; i++) {
            *lower_bound(dp, dp + n, stick[i].second, greater<int>()) = stick[i].second;
        }
        cout << lower_bound(dp, dp + n, -1, greater<int>()) - dp << endl;
    }
    return 0;
}
It follows immediately from Dilworth's theorem: the minimum number of chains (here, ascending subsequences) needed to cover a partially ordered set equals the maximum size of an antichain (here, a descending subsequence). For a simplified example, if the values to cover are 3, 1, 2, the descending pair 3, 1 can never share an ascending subsequence, so at least two are needed, and two suffice: (1, 2) and (3). It's a standard technique for solving problems like this.

Inverting an image using MPI

I am trying to invert a PGM image using MPI. The grayscale (PGM) image should be loaded on the root processor and then be sent to each of the s^2 processors. Each processor will invert a block of the given image, and the inverted blocks will be gathered back on the root processor, which will assemble the blocks into the final image and write it to a PGM image. I ran the following code, but did not get any output. The image was read after running the code, but there was no indication of writing the resultant image. Could you please let me know what could be wrong with it?
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <time.h>
#include <string.h>
#include <math.h>
#include <memory.h>

#define max(x, y) ((x>y) ? (x):(y))
#define min(x, y) ((x<y) ? (x):(y))

int xdim;
int ydim;
int maxraw;
unsigned char *image;

void ReadPGM(FILE*);
void WritePGM(FILE*);

#define s 2

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int p, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int NPROWS = s;               /* number of rows in _decomposition_ */
    const int NPCOLS = s;               /* number of cols in _decomposition_ */
    const int BLOCKROWS = xdim/NPROWS;  /* number of rows in _block_ */
    const int BLOCKCOLS = ydim/NPCOLS;  /* number of cols in _block_ */
    int i, j;
    FILE *fp;

    float BLimage[BLOCKROWS*BLOCKCOLS];
    for (int ii=0; ii<BLOCKROWS*BLOCKCOLS; ii++)
        BLimage[ii] = 0;
    float BLfilteredMat[BLOCKROWS*BLOCKCOLS];
    for (int ii=0; ii<BLOCKROWS*BLOCKCOLS; ii++)
        BLfilteredMat[ii] = 0;

    if (rank == 0) {
        /* begin reading PGM.... */
        ReadPGM(fp);
    }

    MPI_Datatype blocktype;
    MPI_Datatype blocktype2;
    MPI_Type_vector(BLOCKROWS, BLOCKCOLS, ydim, MPI_FLOAT, &blocktype2);
    MPI_Type_create_resized(blocktype2, 0, sizeof(float), &blocktype);
    MPI_Type_commit(&blocktype);

    int disps[NPROWS*NPCOLS];
    int counts[NPROWS*NPCOLS];
    for (int ii=0; ii<NPROWS; ii++) {
        for (int jj=0; jj<NPCOLS; jj++) {
            disps[ii*NPCOLS+jj] = ii*ydim*BLOCKROWS + jj*BLOCKCOLS;
            counts[ii*NPCOLS+jj] = 1;
        }
    }

    MPI_Scatterv(image, counts, disps, blocktype, BLimage, BLOCKROWS*BLOCKCOLS, MPI_FLOAT, 0, MPI_COMM_WORLD);

    //************** Invert the block **************//
    for (int proc=0; proc<p; proc++) {
        if (proc == rank) {
            for (int j = 0; j < BLOCKCOLS; j++) {
                for (int i = 0; i < BLOCKROWS; i++) {
                    BLfilteredMat[j*BLOCKROWS+i] = 255 - image[j*BLOCKROWS+i];
                }
            }
        } // close if (proc == rank)
        MPI_Barrier(MPI_COMM_WORLD);
    } // close for (int proc=0; proc<p; proc++)

    MPI_Gatherv(BLfilteredMat, BLOCKROWS*BLOCKCOLS, MPI_FLOAT, image, counts, disps, blocktype, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        /* Begin writing PGM.... */
        WritePGM(fp);
        free(image);
    }
    MPI_Finalize();
    return (1);
}
It is very likely MPI is not the right tool for the job. The reason for this is that your job is inherently bandwidth limited.
Think of it this way: You have a coloring book with images which you all want to color in.
Method 1: you take your time and color them in one by one.
Method 2: you copy each page to a new sheet of paper and mail it to a friend who then colors it in for you. He mails it back to you and in the end you glue all the pages you received from all of your friends together to make one colored-in book.
Note that method two involves copying the whole book, which is arguably the same amount of work needed to color in the whole book. So method two is less time-efficient without even considering the overhead of shoving the pages into an envelope, licking the stamp, going to the post office and waiting for the letter to be delivered.
If you look at your code, every transmitted byte is only touched once throughout the whole program in this line:
BLfilteredMat[j*BLOCKROWS+i] = 255 - image[j*BLOCKROWS+i];
A single processor is much faster at subtracting two integers than it is at sending an integer over the wire, so I would advise against using MPI for your particular problem.
My suggestion for solving your problem: try to avoid unnecessary communication whenever possible. Do all processes have access to the file system on which the files are located? You could try reading them directly from the filesystem.
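If the goal is simply to get the image inverted, a plain serial loop already touches every byte exactly once; a minimal sketch, using the question's globals after ReadPGM() has filled them in:
/* Serial inversion: one pass over the pixel buffer, no communication needed */
for (int k = 0; k < xdim * ydim; k++)
    image[k] = 255 - image[k];   /* or maxraw - image[k], if the max gray value isn't 255 */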

How to use copy_to_user

I'm trying to add a custom system call to the Linux kernel. Here is a simple example:
#include <linux/mysyscall.h>
#include <linux/kernel.h>
#include <asm/uaccess.h>
#include <asm/system.h>

asmlinkage int sys_mysyscall(int *data)
{
    int a = 3;
    cli();
    copy_to_user(data, &a, 1);
    sti();
    printk(KERN_EMERG "Called with %d\n", a);
    return a;
}
I can compile a kernel with mysyscall added, but when I try to access it with a user program like this:
#include <linux/mysyscall.h>
#include <stdio.h>

int main(void)
{
    int *data;
    int r;
    int a = 0;

    data = &a;
    r = mysyscall(data);
    printf("r is %d and data is %d", r, *data);
    return 0;
}
*data does not equal 3; it equals 0.
How should I use copy_to_user to fix this?
The copy_to_user() call copies only one byte of a instead of the whole int, so depending on your system's byte order *data may not receive the value 3 at all. Copy all sizeof(int) bytes to get the correct result.
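A minimal sketch of a corrected handler under the same setup as the question (sizeof(a) instead of a hard-coded 1; the cli()/sti() pair is dropped because copy_to_user() can sleep on a page fault and should not run with interrupts disabled; EFAULT assumes <linux/errno.h> is available):
asmlinkage int sys_mysyscall(int *data)
{
    int a = 3;

    /* Copy the whole int; copy_to_user() returns the number of bytes NOT copied */
    if (copy_to_user(data, &a, sizeof(a)))
        return -EFAULT;

    printk(KERN_EMERG "Called with %d\n", a);
    return a;
}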

Detect current CPU Clock Speed Programmatically on OS X?

I just bought a nifty MBA 13" Core i7. I'm told the CPU speed varies automatically, and pretty wildly, too. I'd really like to be able to monitor this with a simple app.
Are there any Cocoa or C calls to find the current clock speed, without actually affecting it?
Edit: I'm OK with answers using Terminal calls, as well as programmatic.
Thanks!
Try this tool called "Intel Power Gadget". It displays IA frequency and IA power in real time.
http://software.intel.com/sites/default/files/article/184535/intel-power-gadget-2.zip
You can query the CPU speed easily via sysctl, either by command line:
sysctl hw.cpufrequency
Or via C:
#include <stdio.h>
#include <sys/types.h>
#include <sys/sysctl.h>

int main() {
    int mib[2];
    unsigned int freq;
    size_t len;

    mib[0] = CTL_HW;
    mib[1] = HW_CPU_FREQ;
    len = sizeof(freq);
    sysctl(mib, 2, &freq, &len, NULL, 0);

    printf("%u\n", freq);
    return 0;
}
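Alternatively, sysctlbyname() looks the value up by its string name, mirroring the command line above. A sketch; note that, as far as I know, hw.cpufrequency reports the nominal frequency in Hz on Intel Macs, so it won't track the dynamic speed changes the question asks about:
#include <stdio.h>
#include <stdint.h>
#include <sys/types.h>
#include <sys/sysctl.h>

int main(void) {
    uint64_t freq = 0;
    size_t len = sizeof(freq);

    /* Same value as `sysctl hw.cpufrequency`, queried by name */
    if (sysctlbyname("hw.cpufrequency", &freq, &len, NULL, 0) == 0)
        printf("%llu Hz\n", (unsigned long long)freq);
    return 0;
}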
Since it's an Intel processor, you could always use RDTSC. That's an assembly instruction that returns the current cycle counter, a 64-bit counter that increments every cycle. It's a little approximate, but for example:
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>

uint64_t rdtsc(void)
{
    uint32_t ret0[2];
    __asm__ __volatile__("rdtsc" : "=a"(ret0[0]), "=d"(ret0[1]));
    return ((uint64_t)ret0[1] << 32) | ret0[0];
}

int main(int argc, const char * argv[])
{
    uint64_t startCount = rdtsc();
    sleep(1);
    uint64_t endCount = rdtsc();

    printf("Clocks per second: %llu", endCount - startCount);
    return 0;
}
This outputs 'Clocks per second: 2002120630' on my 2GHz MacBook Pro.
There is a kernel extension written by "flAked" which logs the CPU P-state to the kernel log.
http://www.insanelymac.com/forum/index.php?showtopic=258612
Maybe you could contact him for the code.
This runs correctly on OS X, although note that KERN_CLOCKRATE reports the kernel's timer tick rate (clockinfo.hz), not the CPU clock speed.
It also doesn't work on Linux, where the sysctl() call is deprecated and KERN_CLOCKRATE is undefined.
#include <sys/sysctl.h>
#include <sys/time.h>
int mib[2];
size_t len;
mib[0] = CTL_KERN;
mib[1] = KERN_CLOCKRATE;
struct clockinfo clockinfo;
len = sizeof(clockinfo);
int result = sysctl(mib, 2, &clockinfo, &len, NULL, 0);
assert(result != -1);
log_trace("clockinfo.hz: %d\n", clockinfo.hz);
log_trace("clockinfo.tick: %d\n", clockinfo.tick);

GSL Uniform Random Number Generator

I want to use GSL's uniform random number generator. On their website, they include this sample code:
#include <stdio.h>
#include <gsl/gsl_rng.h>

int
main (void)
{
  const gsl_rng_type * T;
  gsl_rng * r;
  int i, n = 10;

  gsl_rng_env_setup();

  T = gsl_rng_default;
  r = gsl_rng_alloc (T);

  for (i = 0; i < n; i++)
    {
      double u = gsl_rng_uniform (r);
      printf ("%.5f\n", u);
    }

  gsl_rng_free (r);
  return 0;
}
However, this does not set a seed explicitly, so the same random numbers will be produced each time.
They also specify the following:
The generator itself can be changed using the environment variable GSL_RNG_TYPE. Here is the output of the program using a seed value of 123 and the multiple-recursive generator mrg,
$ GSL_RNG_SEED=123 GSL_RNG_TYPE=mrg ./a.out
But I don't understand how to implement this. Any ideas as to what modifications I can make to the above code to incorporate the seed?
The problem is that a new seed is not being generated. If you just want a function that returns a darn random number, and care nothing about the sticky details of how it's generated, try this. Assumes that you have the GSL installed.
#include <iostream>
#include <gsl/gsl_math.h>
#include <gsl/gsl_rng.h>
#include <sys/time.h>

float keithRandom() {
    // Random number function based on the GNU Scientific Library
    // Returns a random float between 0 and 1, exclusive; e.g., (0,1)
    const gsl_rng_type * T;
    gsl_rng * r;
    gsl_rng_env_setup();

    struct timeval tv;                  // Seed generation based on time
    gettimeofday(&tv, 0);
    unsigned long mySeed = tv.tv_sec + tv.tv_usec;

    T = gsl_rng_default;                // Generator setup
    r = gsl_rng_alloc (T);
    gsl_rng_set(r, mySeed);
    double u = gsl_rng_uniform(r);      // Generate it!

    gsl_rng_free (r);
    return (float)u;
}
Read 18.6 Random number environment variables to see what that gsl_rng_env_setup() function is doing. It is getting a generator type and seed from environment variables.
Then see 18.3 Random number generator initialization - if you don't want to get the seed from an environment variable, you can use gsl_rng_set() to set the seed.
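For example, a minimal variation of the sample code from the question that seeds the generator from the clock with gsl_rng_set(); time(NULL) is just one convenient seed source, any other would do:
#include <stdio.h>
#include <time.h>
#include <gsl/gsl_rng.h>

int main(void)
{
    const gsl_rng_type *T;
    gsl_rng *r;
    int i, n = 10;

    gsl_rng_env_setup();            /* still honours GSL_RNG_TYPE / GSL_RNG_SEED */
    T = gsl_rng_default;
    r = gsl_rng_alloc(T);
    gsl_rng_set(r, (unsigned long)time(NULL));   /* override the default seed */

    for (i = 0; i < n; i++)
        printf("%.5f\n", gsl_rng_uniform(r));

    gsl_rng_free(r);
    return 0;
}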
A complete answer to this question with sample code can be seen in this link.
Just for completeness I am putting a copy of the code for a function to create a seed here. It is written by Robert G. Brown: http://www.phy.duke.edu/~rgb/ .
#include <stdio.h>
#include <sys/time.h>

unsigned long int random_seed()
{
    unsigned int seed;
    struct timeval tv;
    FILE *devrandom;

    if ((devrandom = fopen("/dev/random","r")) == NULL) {
        gettimeofday(&tv, 0);
        seed = tv.tv_sec + tv.tv_usec;
    } else {
        fread(&seed, sizeof(seed), 1, devrandom);
        fclose(devrandom);
    }
    return (seed);
}
But from my own experience with this function, the /dev/random solution is very time-consuming compared to gettimeofday(); you can check it yourself. So the gettimeofday() solution might be better for you if its quality is good enough:
#include <stdio.h>
#include <sys/time.h>

unsigned long int random_seed()
{
    struct timeval tv;
    gettimeofday(&tv, 0);
    return (tv.tv_sec + tv.tv_usec);
}
