Synchronization at kernel level during insmod - linux-kernel

I am a kernel newbie and have been experimenting with semaphores lately, which help synchronize kernel threads. I know this is not a good way to synchronize, but please bear with me if I am missing something here.
I have two modules. The first exports a common function to the kernel symbol table, and the second accesses it during its init_module() call.
The first module, given below, exports the symbol "caller":
#define DEBUG
#include <linux/init.h>
#include <linux/module.h>
#include <linux/semaphore.h>
#include <linux/completion.h>

struct semaphore sem = __SEMAPHORE_INITIALIZER(sem, 1);
EXPORT_SYMBOL(sem);

/* completion signalled when caller() starts; its declaration was
 * missing from the original listing */
static struct completion sig;

/* not static: a static function cannot be exported to other modules */
void caller(int a, int b, int count)
{
    volatile int i;    /* volatile so the delay loop is not optimized away */

    complete(&sig);
    printk(KERN_INFO "called from module %s\n", THIS_MODULE->name);
    // down_interruptible(&sem);
    while (--count > 0) {
        for (i = 0; i < 20000; i++)
            ;    /* crude busy-wait delay between messages */
        printk(KERN_INFO "This is a Exported Symbol into the table %d %d\n",
               a + b, count);
    }
    // up(&sem);
}
EXPORT_SYMBOL(caller);

static int __init hello_kernel(void)
{
    int count = 1000000;

    init_completion(&sig);
    printk(KERN_ALERT "Entered in the first module\n");
    caller(1, 1, count);
    return 0;
}

static void __exit hello_exit(void)
{
    dump_stack();
    printk(KERN_EMERG "GOHEAVEN\n");
}

module_init(hello_kernel);
module_exit(hello_exit);
MODULE_LICENSE("GPL");
The second module calls the exported symbol "caller"
#define DEBUG
#include <linux/init.h>
#include <linux/module.h>

/* resolved against the first module's EXPORT_SYMBOL(caller) */
extern void caller(int, int, int);

static int __init hello_kernel(void)
{
    int count = 50;

    printk(KERN_INFO "Entered in the second module\n");
    caller(2, 2, count);
    return 0;
}

static void __exit hello_exit(void)
{
    dump_stack();
    printk(KERN_EMERG "GOHEAVEN\n");
}

module_init(hello_kernel);
module_exit(hello_exit);
MODULE_LICENSE("GPL");
The first module's log comes first, and only then is the second module's init called:
.........
.........
[36056.684970] This is a Exported Symbol into the table 2 36
[36056.685136] This is a Exported Symbol into the table 2 35
[36056.685301] This is a Exported Symbol into the table 2 34
[36056.685467] This is a Exported Symbol into the table 2 33
[36056.685632] This is a Exported Symbol into the table 2 32
[36056.685798] This is a Exported Symbol into the table 2 31
[36056.685963] This is a Exported Symbol into the table 2 30
[36056.686129] This is a Exported Symbol into the table 2 29
[36056.686314] This is a Exported Symbol into the table 2 28
[36056.686488] This is a Exported Symbol into the table 2 27
[36056.686664] This is a Exported Symbol into the table 2 26
[36056.686851] This is a Exported Symbol into the table 2 25
[36056.687032] This is a Exported Symbol into the table 2 24
[36056.687228] This is a Exported Symbol into the table 2 23
[36056.687393] This is a Exported Symbol into the table 2 22
[36056.687559] This is a Exported Symbol into the table 2 21
[36056.687724] This is a Exported Symbol into the table 2 20
[36056.687890] This is a Exported Symbol into the table 2 19
[36056.688055] This is a Exported Symbol into the table 2 18
[36056.688221] This is a Exported Symbol into the table 2 17
[36056.688387] This is a Exported Symbol into the table 2 16
[36056.688552] This is a Exported Symbol into the table 2 15
[36056.688718] This is a Exported Symbol into the table 2 14
[36056.688883] This is a Exported Symbol into the table 2 13
[36056.689049] This is a Exported Symbol into the table 2 12
[36056.689215] This is a Exported Symbol into the table 2 11
[36056.689380] This is a Exported Symbol into the table 2 10
[36056.689546] This is a Exported Symbol into the table 2 9
[36056.689711] This is a Exported Symbol into the table 2 8
[36056.689877] This is a Exported Symbol into the table 2 7
[36056.690044] This is a Exported Symbol into the table 2 6
[36056.690238] This is a Exported Symbol into the table 2 5
[36056.690432] This is a Exported Symbol into the table 2 4
[36056.690604] This is a Exported Symbol into the table 2 3
[36056.690786] This is a Exported Symbol into the table 2 2
[36056.690964] This is a Exported Symbol into the table 2 1
...........
...........
[36058.216861] This is a Exported Symbol into the table 4 19
[36058.216931] This is a Exported Symbol into the table 4 18
[36058.217001] This is a Exported Symbol into the table 4 17
[36058.217070] This is a Exported Symbol into the table 4 16
[36058.217140] This is a Exported Symbol into the table 4 15
[36058.217210] This is a Exported Symbol into the table 4 14
[36058.217279] This is a Exported Symbol into the table 4 13
[36058.217349] This is a Exported Symbol into the table 4 12
[36058.217419] This is a Exported Symbol into the table 4 11
[36058.217488] This is a Exported Symbol into the table 4 10
[36058.217558] This is a Exported Symbol into the table 4 9
[36058.217628] This is a Exported Symbol into the table 4 8
[36058.217698] This is a Exported Symbol into the table 4 7
[36058.217767] This is a Exported Symbol into the table 4 6
[36058.217837] This is a Exported Symbol into the table 4 5
[36058.217907] This is a Exported Symbol into the table 4 4
[36058.217976] This is a Exported Symbol into the table 4 3
[36058.218046] This is a Exported Symbol into the table 4 2
Without the semaphore I expected the dmesg logs from both modules to interleave, since they run the same code with no synchronization, but it looks like insmod loads the modules strictly one after the other.
What am I missing here?
Thanks.
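What you are most likely missing (hedged, since I cannot see exactly how the modules were loaded): insmod does not return until the module's init function has finished. Because caller() runs to completion inside hello_kernel(), the second insmod cannot start until the first one is done, so the loads are serialized by construction and the semaphore never has anything to race against. To see interleaved logs, the work has to leave the init path, for example by running in a kernel thread. A minimal, untested sketch (worker and caller_thread are names I made up):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/err.h>

/* hypothetical worker: runs the printk loop outside init context */
static int worker(void *data)
{
    int count = (long)data;

    while (--count > 0)
        printk(KERN_INFO "work from %s %d\n", THIS_MODULE->name, count);
    return 0;
}

static int __init hello_kernel(void)
{
    struct task_struct *t;

    /* init returns as soon as the thread exists, so a second module
     * loaded right afterwards can run while this loop is still going */
    t = kthread_run(worker, (void *)1000000L, "caller_thread");
    return IS_ERR(t) ? PTR_ERR(t) : 0;
}

static void __exit hello_exit(void)
{
    /* a real module must make sure the thread has exited before
     * unload (e.g. wait on a completion); omitted in this sketch */
}

module_init(hello_kernel);
module_exit(hello_exit);
MODULE_LICENSE("GPL");

With both modules structured this way, the two printk loops really can interleave, and the commented-out down_interruptible()/up() pair becomes meaningful.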

Related

creating base table with multiple column families

I am new to HBase. I am using HBase version 1.1.2 on Microsoft Azure. I have data that looks like this:
id num1 rating
1 254 2
2 40 3
3 83 1
4 120 1
5 91 5
6 101 2
7 17 1
8 10 2
9 11 3
10 31 1
I tried to create a table with two column families of the form:
create 'table1', 'family1', 'family2'
When I loaded my table:
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
-Dimporttsv.columns="HBASE_ROW_KEY,family1:num1, family2:rating" table1 /metric.csv
I got the error:
Error: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 5560 actions: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family family2 does not exist in region table1
When I recreated the table with a single column family, it worked:
create 'table1', 'family1'
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
-Dimporttsv.columns="HBASE_ROW_KEY,family1:num1, family1:rating" table1 /metric.csv
How do I adjust my table creation to account for multiple column families?
HBase ImportTsv internally uses Put operations to load the data into HBase tables, and Put only supports loading into a single column family at a time.
See here, here, and the documentation.

Using loop to insert values from one table to another

I have the following table named screening_plan:
plan_id movie_id plan_start_day plan_end_day plan_min_start_hh24 plan_max_start_hh24 screenings
1 1 1/06/2015 28/06/2015 9 17 5
2 2 1/06/2015 28/06/2015 9 22 4
3 3 1/06/2015 28/06/2015 9 22 5
4 4 1/06/2015 28/06/2015 9 17 4
And another table, theatre:
THEATRE_ID THEATRE_DESCRIPTION THEATRE_TOTAL_ROWS
1 2
2 2
3 3
4 2
There is a total of 18 screenings per day. I have to insert the details in the screening table as follows:
screening_id plan_id theatre_id screening_date screening_start_hh24 screening_start_mm60
1 1 3 1/06/2015 9 0
2 1 3 1/06/2015 11 30
3 1 3 1/06/2015 14 0
4 1/06/2015
plan_id is a foreign key referencing the table screening_plan, and theatre_id is a foreign key referencing the table theatre.
Each movie should be screened as many times as the screenings column in screening_plan specifies.
There is a break of 30 minutes between 2 consecutive screenings in the same theatre.
The screening_start_hh24 should be less than plan_max_start_hh24.
Please note that the screenings for the first movie won't all fit into the provided time interval, so the overflow screening should be done in an alternate theatre (preferably theatre_id=2, starting from 11:30).
Each movie has a length of 2 hours.
I have been stuck on this since yesterday. I tried doing it with an IF-ELSE block, but that requires defining every condition. How can I do this using a loop? Please help.
My code (I have skipped the declaration part here):
BEGIN
SELECT plan_id INTO s_plan_id FROM screening_plan WHERE plan_id=1;
SELECT theatre_id INTO s_theatre_id FROM theatre WHERE theatre_id=1;
SELECT PLAN_START_DATE INTO s_screening_date FROM screening_plan WHERE plan_id=1;
SELECT Count(*) INTO s_count_theatre_id FROM screening;
IF s_count_theatre_id = 0
THEN
s_screening_start_hh24:=9;
s_screening_start_mm60:=0;
ELSIF s_count_theatre_id >0 AND s_count_theatre_id <=4
THEN
s_screening_start_hh24:=11 ;
s_screening_start_mm60:=30 ;
ELSE
Dbms_Output.put_line('---');
END IF;
INSERT INTO screening (plan_id, theatre_id, screening_date, screening_start_hh24, screening_start_mm60)
VALUES( s_plan_id,
s_theatre_id,
s_screening_date,
s_screening_start_hh24,
s_screening_start_mm60);
END;
There should be a total of 18 records in the screening table: 5 for movie_id=1, 4 for movie_id=2, 5 for movie_id=3 and 4 for movie_id=4.
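Since no answer is shown here, a hedged sketch of a loop-based approach (column names are taken from the question; the alternate-theatre rule and the 2-hour-plus-30-minute spacing are my reading of the requirements, and screening_id is assumed to be filled by a sequence or trigger):

DECLARE
  v_theatre theatre.theatre_id%TYPE;
  v_hh      NUMBER;
  v_mm      NUMBER;
BEGIN
  FOR p IN (SELECT plan_id, plan_start_day, plan_min_start_hh24,
                   plan_max_start_hh24, screenings
              FROM screening_plan
             ORDER BY plan_id)
  LOOP
    v_theatre := p.plan_id;            -- assumption: plan N starts in theatre N
    v_hh      := p.plan_min_start_hh24;
    v_mm      := 0;
    FOR s IN 1 .. p.screenings LOOP
      IF v_hh > p.plan_max_start_hh24 THEN
        -- day is full: roll the overflow screening into an alternate theatre
        v_theatre := MOD(v_theatre, 4) + 1;       -- hypothetical choice
        v_hh      := p.plan_min_start_hh24 + 2;   -- 11:30, as in the question
        v_mm      := 30;
      END IF;
      INSERT INTO screening (plan_id, theatre_id, screening_date,
                             screening_start_hh24, screening_start_mm60)
      VALUES (p.plan_id, v_theatre, p.plan_start_day, v_hh, v_mm);
      -- next slot: 2-hour movie plus the 30-minute break
      v_mm := v_mm + 30;
      v_hh := v_hh + 2 + TRUNC(v_mm / 60);
      v_mm := MOD(v_mm, 60);
    END LOOP;
  END LOOP;
END;
/

For plan 1 this produces 9:00, 11:30, 14:00 and 16:30, then rolls the fifth screening into the alternate theatre at 11:30, and it inserts 5+4+5+4 = 18 rows in total.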

How to load data into a couple of tables from a single file with different record structures in Hive?

I have a single file with a structure like:
A 1 2 3
A 4 5 6
A 5 8 12
B abc cde
B and fae
B bsd oio
C 1
C 2
C 3
and would like to load the data into 3 simple tables (A(int, int, int), B(string, string), C(int)).
Is it possible and how?
It's also fine for me if the tables are A(string, int, int, int) etc., with the first column of the file included in the table.
I'd go with option 1 as Praveen suggests. I'd create an external table of only a string, and use the FROM ( ... ) syntax to insert into multiple tables at once. I think something like the following would work:
create external table source_table( line string )
stored as textfile
location '/myfile';
from ( select split( line , " ") as col_array from source_table ) cols
insert overwrite table A select col_array[1], col_array[2], col_array[3] where col_array[0] = 'A'
insert overwrite table B select col_array[1], col_array[2] where col_array[0] = 'B'
insert overwrite table C select col_array[1] where col_array[0] = 'C';
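One thing the multi-insert above assumes is that the three target tables already exist. A hedged sketch of matching DDL (the column names are my own; and since split() yields strings, the int columns may need an explicit cast in the selects):

-- hypothetical target tables matching the layouts in the question
create table A ( c1 int, c2 int, c3 int );
create table B ( c1 string, c2 string );
create table C ( c1 int );

-- e.g. for table A, with explicit casts:
--   select cast(col_array[1] as int), cast(col_array[2] as int),
--          cast(col_array[3] as int)
--   where col_array[0] = 'A'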
Option 1) Map the entire data to a Hive table and then use the insert overwrite table .... option to map the appropriate data to the target tables.
Option 2) Develop a MR program to split the file into multiple files and then do the mapping of the files to the target tables in Hive.

Getting the timestamp of a file using PL/SQL

I am trying to get the timestamp of a text file abc.txt in some directory XYZ on the Oracle server. This file can be updated at any time of the day, and I have to check whether it was updated any time after yesterday midnight; if yes, I need to email that file as an attachment.
Is there any way I can check this?
I have searched a lot on the internet but could not find a solution. It would be great if anyone could guide me.
Thanks.
I think you will have to do this by writing a java procedure, as described here by Tom Kyte:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:439619916584
GRANT JAVAUSERPRIV to <your user>
/
create global temporary table DIR_LIST
( filename varchar2(255) )
on commit delete rows
/
create or replace and compile java source named "DirList"
as
import java.io.*;
import java.sql.*;
public class DirList
{
public static void getList(String directory)
throws SQLException
{
File path = new File( directory );
String[] list = path.list();
String element;
for(int i = 0; i < list.length; i++)
{
element = list[i];
#sql { INSERT INTO DIR_LIST (FILENAME)
VALUES (:element) };
}
}
}
/
create or replace procedure get_dir_list( p_directory in varchar2 )
as language java
name 'DirList.getList( java.lang.String )';
/
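The listing above only records file names, but the question needs modification times. A hedged extension of the same idea: give DIR_LIST an extra LASTMODIFIED column and, inside the loop, insert java.io.File's timestamp as well (untested sketch):

File f = new File(directory, element);
java.sql.Timestamp ts = new java.sql.Timestamp(f.lastModified());
#sql { INSERT INTO DIR_LIST (FILENAME, LASTMODIFIED)
       VALUES (:element, :ts) };

After exec get_dir_list('XYZ') you could then select the rows where LASTMODIFIED >= trunc(sysdate) and email those files.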
Another approach might be to make use of the preprocessor directive for external tables.
Have a look at Mr. Kyte's article in the Nov/Dec 2012 issue of Oracle Magazine: he plays with Unix df, and you can do the same with Unix ls or Windows dir.
http://www.oracle.com/technetwork/issue-archive/2012/12-nov/o62asktom-1867739.html
SQL> create table df
2 (
3 fsname varchar2(100),
4 blocks number,
5 used number,
6 avail number,
7 capacity varchar2(10),
8 mount varchar2(100)
9 )
10 organization external
11 (
12 type oracle_loader
13 default directory exec_dir
14 access parameters
15 (
16 records delimited
17 by newline
18 preprocessor
19 exec_dir:'run_df.sh'
20 skip 1
21 fields terminated by
22 whitespace ldrtrim
23 )
24 location
25 (
26 exec_dir:'run_df.sh'
27 )
28 )
29 /
Table created.

Copying data from LOB Column to Long Raw Column

I was looking for a query which picks data from a table having a BLOB column and updates a table having a LONG RAW column. It seems Oracle supports only up to 4000 characters. Is there a way to copy the full data from BLOB to LONG RAW?
I was using the following query:
insert into APPDBA.QA_SOFTWARE_DUMMY
select SOFTWARE_ID, UPDATED_BY, CREATE_CHANGE_DATE, FILE_NAME,
DBMS_LOB.SUBSTR(SOFTWARE_FILE, 4000) SOFTWARE_FILE, SOFTWARE_TYPE
from APPDBA.QA_SOFTWARE_DUMMY_TEST ;
but DBMS_LOB.SUBSTR supports only up to 4000 characters.
Any help is highly appreciated.
PL/SQL will only read/write the first 32k of a LONG RAW, and SQL will treat the column as a RAW, so it will only deal with the first 2000 bytes.
You can use java to access LONG RAW columns directly from the DB, as demonstrated in the question "Get the LENGTH of a LONG RAW".
Here's a little example, first the setup:
SQL> CREATE TABLE t (ID NUMBER PRIMARY key, source BLOB, destination LONG RAW);
Table created
SQL> DECLARE
2 l_lob BLOB;
3 BEGIN
4 INSERT INTO t VALUES (1, 'FF', '') RETURNING SOURCE INTO l_lob;
5 FOR i IN 1..10 LOOP
6 dbms_lob.writeappend(l_lob, 4000,
7 utl_raw.overlay('FF', 'FF', 1, 4000, 'FF'));
8 END LOOP;
9 END;
10 /
PL/SQL procedure successfully completed
The Java class:
SQL> CREATE OR REPLACE AND COMPILE JAVA SOURCE NAMED "Raw" AS
2 import java.io.*;
3 import java.sql.*;
4 import oracle.jdbc.driver.*;
5
6 public class Raw {
7
8 public static void updateRaw(int pk) throws SQLException,IOException {
9
10 Connection conn = new OracleDriver().defaultConnection();
11
12 PreparedStatement ps = conn.prepareStatement
13 ( "SELECT dbms_lob.getlength(source) length, source "
14 + "FROM t WHERE id = ? FOR UPDATE");
15 ps.setInt( 1, pk);
16 ResultSet rs = ps.executeQuery();
17
18 rs.next();
19 int len = rs.getInt(1);
20 InputStream source = rs.getBinaryStream(2);
21 byte[] destArray = new byte[len];
22 int byteRead = source.read(destArray);
23 ps = conn.prepareStatement(
24 "UPDATE t SET destination = ? WHERE id = ?");
25 ((OraclePreparedStatement) ps).setRAW(1,
26 new oracle.sql.RAW(destArray));
27 ps.setInt(2, pk);
28 ps.execute();
29 }
30 }
31 /
Java created
You can call this procedure from PL/SQL:
SQL> CREATE OR REPLACE
2 PROCEDURE update_raw(p_id NUMBER)
3 AS LANGUAGE JAVA NAME 'Raw.updateRaw(int)';
4 /
Procedure created
SQL> exec update_raw(1);
PL/SQL procedure successfully completed
Note that you are doing this in reverse (normally you would migrate from LONG to LOB, since LONG is obsolete)...
You must use the dbms_lob package and write some PL/SQL; in particular, dbms_lob.read and dbms_lob.getlength are useful.
Documentation can be found at Psoug.org or in the Oracle docs.
