Vaadin table column order is getting changed automatically - Vaadin 8

I have created a table with a few columns and given each column a header name. After the table is initialized, the columns do not appear in the order in which they were added to the table; instead they are reordered alphabetically. Please find the relevant code below.
private Table createGridTable() {
    Table grid = new FilterAndPagedTable(this.generateTableOptions());
    grid.addContainerProperty("previousPeriod", String.class, null);
    grid.addContainerProperty("prevchannelA", Float.class, null);
    grid.addContainerProperty("prevchannelB", Float.class, null);
    grid.addContainerProperty("prevchannelC", Float.class, null);
    grid.addContainerProperty("prevchannelD", Float.class, null);
    grid.addContainerProperty("prevAllChannelCons", Float.class, null);
    grid.addContainerProperty("presentPeriod", String.class, null);
    grid.addContainerProperty("presentchannelA", Float.class, null);
    grid.addContainerProperty("presentchannelB", Float.class, null);
    grid.addContainerProperty("presentchannelC", Float.class, null);
    grid.addContainerProperty("presentchannelD", Float.class, null);
    grid.addContainerProperty("presentAllChannelCons", Float.class, null);
    grid.addContainerProperty("diffOfPrevNPresent", Float.class, null);
    grid.addContainerProperty("percentageChangeOfPrevNPresent", String.class, null);
    grid.setColumnHeader("previousPeriod", PrevYearConstants.PREVIOUS_PERIOD);
    grid.setColumnHeader("prevchannelA", IemsConstants.A_DC_Power);
    grid.setColumnHeader("prevchannelB", IemsConstants.B_Essential_Cooling);
    grid.setColumnHeader("prevchannelC", IemsConstants.C_UPS_Power);
    grid.setColumnHeader("prevchannelD", IemsConstants.D_Non_Essential_Cooling);
    grid.setColumnHeader("prevAllChannelCons", "A + B + C + D");
    grid.setColumnHeader("presentPeriod", PrevYearConstants.PRESENT_PERIOD);
    grid.setColumnHeader("presentchannelA", IemsConstants.A_DC_Power);
    grid.setColumnHeader("presentchannelB", IemsConstants.B_Essential_Cooling);
    grid.setColumnHeader("presentchannelC", IemsConstants.C_UPS_Power);
    grid.setColumnHeader("presentchannelD", IemsConstants.D_Non_Essential_Cooling);
    grid.setColumnHeader("presentAllChannelCons", "A + B + C + D");
    grid.setColumnHeader("diffOfPrevNPresent", PrevYearConstants.DIFFERENCE);
    grid.setColumnHeader("percentageChangeOfPrevNPresent", PrevYearConstants.PERCENTAGE);
    System.out.println(grid.isSortAscending());
    System.out.println(grid.isSortEnabled());
    grid.setVisibleColumns(new Object[] { "previousPeriod", "prevchannelA", "prevchannelB", "prevchannelC",
            "prevchannelD", "prevAllChannelCons", "presentPeriod", "presentchannelA", "presentchannelB",
            "presentchannelC", "presentchannelD", "presentAllChannelCons", "diffOfPrevNPresent",
            "percentageChangeOfPrevNPresent" });
    grid.setSizeFull();
    return grid;
}

public FilterAndPagedTableOptions generateTableOptions() {
    FilterAndPagedTableOptions fptOptions = new FilterAndPagedTableOptions();
    fptOptions.setCollapseAllowed(false);
    fptOptions.setColumnReorderingAllowed(false);
    fptOptions.setSortAllowed(false);
    fptOptions.setShowInbuiltFilterBar(false);
    return fptOptions;
}
I am loading data into the table as below:
List<PrevYearConsumption> tableContainer = viewElements.get(PrevYearConstants.PREV_YEAR_TABLE_CONFIG)
        .getTableContainer();
this.grid.setPageLength(tableContainer.size());
BeanItemContainer<PrevYearConsumption> container = new BeanItemContainer<>(PrevYearConsumption.class);
container.addAll(tableContainer);
this.grid.setContainerDataSource(container);
The order of the columns in the table does not match the order in which I added them; it appears random.
I am using Vaadin 8.
Kindly help here.
Let me know in case any further info is required.

Is this pure Vaadin 8, Vaadin 7, or Vaadin 8 with the compatibility packages?
I would try to invoke
grid.setVisibleColumns(new Object[] { "previousPeriod", "prevchannelA", "prevchannelB", "prevchannelC",
        "prevchannelD", "prevAllChannelCons", "presentPeriod", "presentchannelA", "presentchannelB",
        "presentchannelC", "presentchannelD", "presentAllChannelCons", "diffOfPrevNPresent",
        "percentageChangeOfPrevNPresent" });
after loading the data into the table.
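To expand on that suggestion: the compatibility Table resets its visible columns to the container's property IDs whenever setContainerDataSource() is called, and a BeanItemContainer does not guarantee any particular property order, which would explain the seemingly alphabetical reordering. A minimal, untested sketch of the data-loading code with the column order re-applied afterwards (class and constant names taken from the question):
List<PrevYearConsumption> tableContainer = viewElements.get(PrevYearConstants.PREV_YEAR_TABLE_CONFIG)
        .getTableContainer();
this.grid.setPageLength(tableContainer.size());
BeanItemContainer<PrevYearConsumption> container = new BeanItemContainer<>(PrevYearConsumption.class);
container.addAll(tableContainer);
this.grid.setContainerDataSource(container);
// Re-apply the intended order *after* the container is set, because setting the
// container replaces the visible columns with the container's own property IDs.
this.grid.setVisibleColumns(new Object[] { "previousPeriod", "prevchannelA", "prevchannelB", "prevchannelC",
        "prevchannelD", "prevAllChannelCons", "presentPeriod", "presentchannelA", "presentchannelB",
        "presentchannelC", "presentchannelD", "presentAllChannelCons", "diffOfPrevNPresent",
        "percentageChangeOfPrevNPresent" });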

Related

Wrong number or types of arguments when calling a procedure in WCF service

I'm sort of having a hard time with this one. I have two different solutions (solution1 has a Web Application project; solution2 has a Website project). Inside each solution there is a WCF service structure, and I have the exact same code in both services (in their respective solutions). My code compiles just fine. From the service I make a simple call to a procedure that returns a cursor. When I execute the service from the Web Application it works just fine; when I do the same from the Website I get the error: "wrong number or types of arguments". They both call the same procedure, in the same DB, and I have triple-checked that the code is the same in both services. Any ideas or suggestions? My code is as follows in both solutions:
Service.cs
public List<A1001310> SearchClient_A1001310()
{
DataTable dataTable = new DataTable();
dataTable = DataManager.SearchClient();
List<A1001310> list = new List<A1001310>();
list = (from DataRow dr in dataTable.Rows
select new A1001310()
{
Id = Convert.ToInt32(dr["CLIENT_ID"]),
//ClientName = dr["NOM_CLIENTE"].ToString()
}).ToList();
return list;
}
DataManager.cs
public static DataTable SearchClient()
{
try
{
using (OleDbCommand cmd = new OleDbCommand(packetName + ".select_A1001310"))
{
cmd.CommandType = CommandType.StoredProcedure;
SqlManager sqlManager = new SqlManager();
return sqlManager.GetDataTable(cmd);
}
}
catch (Exception ex)
{
//TODO; Handle exception
}
return null;
}
The call to GetDataTable is:
public DataTable GetDataTable(OleDbCommand cmd)
{
using (DataSet ds = GetDataSet(cmd))
{
return ((ds != null && ds.Tables.Count > 0) ? ds.Tables[0] : null);
}
}
public DataSet GetDataSet(OleDbCommand cmd)
{
using (DataSet ds = new DataSet())
{
this.ConvertToNullBlankParameters(cmd);
using (OleDbConnection conn = new OleDbConnection(cmd.Connection == null ? _dbConnection : cmd.Connection.ConnectionString))
{
cmd.Connection = conn;
cmd.CommandTimeout = _connTimeout;
conn.Open();
//cmd.ExecuteScalar();
using (OleDbDataAdapter da = new OleDbDataAdapter(cmd))
da.Fill(ds);
}
return ds;
}
}
The procedure is as follows:
PROCEDURE select_A1001310(io_cursor OUT lcursor_data)
AS
BEGIN
OPEN io_cursor FOR
--
SELECT client_id
FROM a1001310
WHERE status = 'A';
--
EXCEPTION
WHEN OTHERS THEN
IF io_cursor%ISOPEN THEN
CLOSE io_cursor;
END IF;
--REVIEW: EXCEPTION HANDLER
END select_A1001310;
So, if it helps anyone, I resolved my issue by specifying the OUT parameter declared in the procedure. This resulted in me changing from OleDb to OracleClient, as follows:
public static DataTable SearchClient()
{
string connection = ConfigurationManager.ConnectionStrings["DBConnection_Oracle"].ToString();
string procedure = packetName + ".p_search_client";
OracleParameter[] parameters = new OracleParameter[1];
parameters[0] = new OracleParameter("io_cursor", OracleType.Cursor, 4000, ParameterDirection.Output, true, 0, 0, "", DataRowVersion.Current, String.Empty);
DataTable dt = new DataTable();
dt = DataManager_Oracle.GetDataTable_(connection, procedure, parameters);
return dt;
}
It seems that in the Website environment it didn't like leaving out the OUT parameter, whereas in the WebApplication I did not specify it and it worked just fine... If someone knows why, PLEASE let me know :)

Last reducer has been running for 24 hours on a 200 GB data set

Hi, I have a MapReduce application that bulk loads data into HBase.
I have 142 text files with a total size of 200 GB.
My mappers complete within 5 minutes, and so do all the reducers except the last one, which is stuck at 100%.
It is taking a very long time and has been running for the past 24 hours.
I have one column family.
My row keys look like this:
48433197315|1972-03-31T00:00:00Z|4
48433197315|1972-03-31T00:00:00Z|38
48433197315|1972-03-31T00:00:00Z|41
48433197315|1972-03-31T00:00:00Z|23
48433197315|1972-03-31T00:00:00Z|7
48433336118|1972-03-31T00:00:00Z|17
48433197319|1972-03-31T00:00:00Z|64
48433197319|1972-03-31T00:00:00Z|58
48433197319|1972-03-31T00:00:00Z|61
48433197319|1972-03-31T00:00:00Z|73
48433197319|1972-03-31T00:00:00Z|97
48433336119|1972-03-31T00:00:00Z|7
I have created my table like this:
private static Configuration getHbaseConfiguration() {
try {
if (hbaseConf == null) {
System.out.println(
"UserId= " + USERID + " \t keytab file =" + KEYTAB_FILE + " \t conf =" + KRB5_CONF_FILE);
HBaseConfiguration.create();
hbaseConf = HBaseConfiguration.create();
hbaseConf.set("mapreduce.job.queuename", "root.fricadev");
hbaseConf.set("mapreduce.child.java.opts", "-Xmx6553m");
hbaseConf.set("mapreduce.map.memory.mb", "8192");
hbaseConf.setInt(MAX_FILES_PER_REGION_PER_FAMILY, 1024);
System.setProperty("java.security.krb5.conf", KRB5_CONF_FILE);
UserGroupInformation.loginUserFromKeytab(USERID, KEYTAB_FILE);
}
} catch (Exception e) {
e.printStackTrace();
}
return hbaseConf;
}
/**
* HBase bulk import example Data preparation MapReduce job driver
*
* args[0]: HDFS input path args[1]: HDFS output path
*
* @throws Exception
*
*/
public static void main(String[] args) throws Exception {
if (hbaseConf == null)
hbaseConf = getHbaseConfiguration();
String outputPath = args[2];
hbaseConf.set("data.seperator", DATA_SEPERATOR);
hbaseConf.set("hbase.table.name", args[0]);
hbaseConf.setInt(MAX_FILES_PER_REGION_PER_FAMILY, 1024);
Job job = new Job(hbaseConf);
job.setJarByClass(HBaseBulkLoadDriver.class);
job.setJobName("Bulk Loading HBase Table::" + args[0]);
job.setInputFormatClass(TextInputFormat.class);
job.setMapOutputKeyClass(ImmutableBytesWritable.class);
job.setMapperClass(HBaseBulkLoadMapperUnzipped.class);
// job.getConfiguration().set("mapreduce.job.acl-view-job",
// "bigdata-app-fricadev-sdw-u6034690");
if (HbaseBulkLoadMapperConstants.FUNDAMENTAL_ANALYTIC.equals(args[0])) {
HTableDescriptor descriptor = new HTableDescriptor(Bytes.toBytes(args[0]));
descriptor.addFamily(new HColumnDescriptor(COLUMN_FAMILY));
HBaseAdmin admin = new HBaseAdmin(hbaseConf);
byte[] startKey = new byte[16];
Arrays.fill(startKey, (byte) 0);
byte[] endKey = new byte[16];
Arrays.fill(endKey, (byte) 255);
admin.createTable(descriptor, startKey, endKey, REGIONS_COUNT);
admin.close();
// HColumnDescriptor hcd = new
// HColumnDescriptor(COLUMN_FAMILY).setMaxVersions(1);
// createPreSplitLoadTestTable(hbaseConf, descriptor, hcd);
}
job.getConfiguration().setBoolean("mapreduce.compress.map.output", true);
job.getConfiguration().setBoolean("mapreduce.map.output.compress", true);
job.getConfiguration().setBoolean("mapreduce.output.fileoutputformat.compress", true);
job.getConfiguration().setClass("mapreduce.map.output.compression.codec",
org.apache.hadoop.io.compress.GzipCodec.class, org.apache.hadoop.io.compress.CompressionCodec.class);
job.getConfiguration().set("hfile.compression", Compression.Algorithm.LZO.getName());
// Connection connection =
// ConnectionFactory.createConnection(hbaseConf);
// Table table = connection.getTable(TableName.valueOf(args[0]));
FileInputFormat.setInputPaths(job, args[1]);
FileOutputFormat.setOutputPath(job, new Path(outputPath));
job.setMapOutputValueClass(Put.class);
HFileOutputFormat.configureIncrementalLoad(job, new HTable(hbaseConf, args[0]));
System.exit(job.waitForCompletion(true) ? 0 : -1);
System.out.println("job is successfull..........");
// LoadIncrementalHFiles loader = new LoadIncrementalHFiles(hbaseConf);
// loader.doBulkLoad(new Path(outputPath), (HTable) table);
HBaseBulkLoad.doBulkLoad(outputPath, args[0]);
}
/**
* Enum of counters.
* It is used to collect statistics.
*/
public static enum Counters {
/**
* Counts data format errors.
*/
WRONG_DATA_FORMAT_COUNTER
}
}
There is no reducer in my code, only a mapper.
My mapper code is like this:
public class FundamentalAnalyticLoader implements TableLoader {
private ImmutableBytesWritable hbaseTableName;
private Text value;
private Mapper<LongWritable, Text, ImmutableBytesWritable, Put>.Context context;
private String strFileLocationAndDate;
#SuppressWarnings("unchecked")
public FundamentalAnalyticLoader(ImmutableBytesWritable hbaseTableName, Text value, Context context,
String strFileLocationAndDate) {
//System.out.println("Constructing Fundalmental Analytic Load");
this.hbaseTableName = hbaseTableName;
this.value = value;
this.context = context;
this.strFileLocationAndDate = strFileLocationAndDate;
}
#SuppressWarnings("deprecation")
public void load() {
if (!HbaseBulkLoadMapperConstants.FF_ACTION.contains(value.toString())) {
String[] values = value.toString().split(HbaseBulkLoadMapperConstants.DATA_SEPERATOR);
String[] strArrFileLocationAndDate = strFileLocationAndDate
.split(HbaseBulkLoadMapperConstants.FIELD_SEPERATOR);
if (17 == values.length) {
String strKey = values[5].trim() + "|" + values[0].trim() + "|" + values[3].trim() + "|"
+ values[4].trim() + "|" + values[14].trim() + "|" + strArrFileLocationAndDate[0].trim() + "|"
+ strArrFileLocationAndDate[2].trim();
//String strRowKey=StringUtils.leftPad(Integer.toString(Math.abs(strKey.hashCode() % 470)), 3, "0") + "|" + strKey;
byte[] hashedRowKey = HbaseBulkImportUtil.getHash(strKey);
Put put = new Put((hashedRowKey));
put.add(Bytes.toBytes(HbaseBulkLoadMapperConstants.COLUMN_FAMILY),
Bytes.toBytes(HbaseBulkLoadMapperConstants.FUNDAMENTAL_SERIES_ID),
Bytes.toBytes(values[0].trim()));
put.add(Bytes.toBytes(HbaseBulkLoadMapperConstants.COLUMN_FAMILY),
Bytes.toBytes(HbaseBulkLoadMapperConstants.FUNDAMENTAL_SERIES_ID_OBJECT_TYPE_ID),
Bytes.toBytes(values[1].trim()));
put.add(Bytes.toBytes(HbaseBulkLoadMapperConstants.COLUMN_FAMILY),
Bytes.toBytes(HbaseBulkLoadMapperConstants.FUNDAMENTAL_SERIES_ID_OBJECT_TYPE),
Bytes.toBytes(values[2]));
put.add(Bytes.toBytes(HbaseBulkLoadMapperConstants.COLUMN_FAMILY),
Bytes.toBytes(HbaseBulkLoadMapperConstants.FINANCIAL_PERIOD_END_DATE),
Bytes.toBytes(values[3].trim()));
put.add(Bytes.toBytes(HbaseBulkLoadMapperConstants.COLUMN_FAMILY),
Bytes.toBytes(HbaseBulkLoadMapperConstants.FINANCIAL_PERIOD_TYPE),
Bytes.toBytes(values[4].trim()));
put.add(Bytes.toBytes(HbaseBulkLoadMapperConstants.COLUMN_FAMILY),
Bytes.toBytes(HbaseBulkLoadMapperConstants.LINE_ITEM_ID), Bytes.toBytes(values[5].trim()));
put.add(Bytes.toBytes(HbaseBulkLoadMapperConstants.COLUMN_FAMILY),
Bytes.toBytes(HbaseBulkLoadMapperConstants.ANALYTIC_ITEM_INSTANCE_KEY),
Bytes.toBytes(values[6].trim()));
put.add(Bytes.toBytes(HbaseBulkLoadMapperConstants.COLUMN_FAMILY),
Bytes.toBytes(HbaseBulkLoadMapperConstants.ANALYTIC_VALUE), Bytes.toBytes(values[7].trim()));
put.add(Bytes.toBytes(HbaseBulkLoadMapperConstants.COLUMN_FAMILY),
Bytes.toBytes(HbaseBulkLoadMapperConstants.ANALYTIC_CONCEPT_CODE),
Bytes.toBytes(values[8].trim()));
put.add(Bytes.toBytes(HbaseBulkLoadMapperConstants.COLUMN_FAMILY),
Bytes.toBytes(HbaseBulkLoadMapperConstants.ANALYTIC_VALUE_CURRENCY_ID),
Bytes.toBytes(values[9].trim()));
put.add(Bytes.toBytes(HbaseBulkLoadMapperConstants.COLUMN_FAMILY),
Bytes.toBytes(HbaseBulkLoadMapperConstants.ANALYTIC_IS_ESTIMATED),
Bytes.toBytes(values[10].trim()));
put.add(Bytes.toBytes(HbaseBulkLoadMapperConstants.COLUMN_FAMILY),
Bytes.toBytes(HbaseBulkLoadMapperConstants.ANALYTIC_AUDITABILITY_EQUATION),
Bytes.toBytes(values[11].trim()));
put.add(Bytes.toBytes(HbaseBulkLoadMapperConstants.COLUMN_FAMILY),
Bytes.toBytes(HbaseBulkLoadMapperConstants.FINANCIAL_PERIOD_TYPE_ID),
Bytes.toBytes(values[12].trim()));
put.add(Bytes.toBytes(HbaseBulkLoadMapperConstants.COLUMN_FAMILY),
Bytes.toBytes(HbaseBulkLoadMapperConstants.ANALYTIC_CONCEPT_ID),
Bytes.toBytes(values[13].trim()));
put.add(Bytes.toBytes(HbaseBulkLoadMapperConstants.COLUMN_FAMILY),
Bytes.toBytes(HbaseBulkLoadMapperConstants.ANALYTIC_LINE_ITEM_IS_YEAR_TO_DATE),
Bytes.toBytes(values[14].trim()));
put.add(Bytes.toBytes(HbaseBulkLoadMapperConstants.COLUMN_FAMILY),
Bytes.toBytes(HbaseBulkLoadMapperConstants.IS_ANNUAL), Bytes.toBytes(values[15].trim()));
// put.add(Bytes.toBytes(HbaseBulkLoadMapperConstants.COLUMN_FAMILY),
// Bytes.toBytes(HbaseBulkLoadMapperConstants.TAXONOMY_ID),
// Bytes.toBytes(values[16].trim()));
//
// put.add(Bytes.toBytes(HbaseBulkLoadMapperConstants.COLUMN_FAMILY),
// Bytes.toBytes(HbaseBulkLoadMapperConstants.INSTRUMENT_ID),
// Bytes.toBytes(values[17].trim()));
put.add(Bytes.toBytes(HbaseBulkLoadMapperConstants.COLUMN_FAMILY),
Bytes.toBytes(HbaseBulkLoadMapperConstants.FF_ACTION),
Bytes.toBytes(values[16].substring(0, values[16].length() - 3)));
put.add(Bytes.toBytes(HbaseBulkLoadMapperConstants.COLUMN_FAMILY),
Bytes.toBytes(HbaseBulkLoadMapperConstants.FILE_PARTITION),
Bytes.toBytes(strArrFileLocationAndDate[0].trim()));
put.add(Bytes.toBytes(HbaseBulkLoadMapperConstants.COLUMN_FAMILY),
Bytes.toBytes(HbaseBulkLoadMapperConstants.FILE_PARTITION_DATE),
Bytes.toBytes(strArrFileLocationAndDate[2].trim()));
try {
context.write(hbaseTableName, put);
} catch (IOException e) {
context.getCounter(Counters.WRONG_DATA_FORMAT_COUNTER).increment(1);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
} else {
System.out.println("Values length is less 15 and value is " + value.toString());
}
}
}
Any help to improve the speed is highly appreciated.
(A screenshot of the job counters was attached here.)
I suspect that all records go into a single region.
When you created the empty table, HBase split the key address space into even ranges. But because all of your actual keys share the same prefix, they all land in a single region. That means a single region (and its reduce task) does all the work while the other regions/reduce tasks do nothing useful. You can check this hypothesis by looking at the Hadoop counters: how many bytes the slow reduce task read/wrote compared to the other reduce tasks.
If this is the problem, then you need to manually prepare split keys and create the table using createTable(HTableDescriptor desc, byte[][] splitKeys). The split keys should divide your actual dataset evenly for optimal performance.
Example #1. If your keys were ordinary English words, it would be easy to split the table into 26 regions by the first character (split keys 'a', 'b', ..., 'z'), or into 26*26 regions by the first two characters ('aa', 'ab', ..., 'zz'). The regions would not necessarily be even, but this would still be better than having only a single region.
Example #2. If your keys were 4-byte hashes, it would be easy to split the table into 256 regions by the first byte (0x00, 0x01, ..., 0xff), or into 2^16 regions by the first two bytes.
In your particular case, I see two options:
Search for the smallest key (in sorted order) and the largest key in your dataset, and use them as startKey and endKey for Admin.createTable(). This will work well only if the keys are uniformly distributed between startKey and endKey.
Prefix your keys with hash(key) and use the method from Example #2. This should work well, but you won't be able to make semantic queries like (KEY >= ${first} and KEY <= ${last}). See the sketch below.
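A minimal, untested sketch of option 2, using the same older HBase client API as the question's driver (args[0], COLUMN_FAMILY and hbaseConf are reused from that code; the 255 single-byte split points yield 256 regions, which assumes the row keys are effectively hashed, as the mapper's getHash() call appears to do):
// Pre-split the table on the first byte of the (hashed) row key so that the
// bulk load spreads across 256 regions instead of piling into one.
byte[][] splitKeys = new byte[255][];
for (int i = 1; i <= 255; i++) {
    splitKeys[i - 1] = new byte[] { (byte) i };   // split points 0x01 .. 0xff
}
HTableDescriptor descriptor = new HTableDescriptor(Bytes.toBytes(args[0]));
descriptor.addFamily(new HColumnDescriptor(COLUMN_FAMILY));
HBaseAdmin admin = new HBaseAdmin(hbaseConf);
admin.createTable(descriptor, splitKeys);
admin.close();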
Mostly, if a job hangs in the last minutes or seconds, the issue is a particular node or a resource with concurrency problems, etc.
A small checklist could be:
1. Try again with a smaller data set. This will rule out basic functioning of the code.
2. Since most of the job is done, the mapper and reducer are probably fine. You can try running the job with the same volume a few times. The logs can help you identify whether the same node has issues on repeated runs.
3. Verify that the output is being generated as expected.
4. You can also reduce the number of columns you are trying to add to HBase. This will relieve the load for the same volume.
Jobs can hang for a variety of reasons, but troubleshooting mostly consists of the steps above: verifying whether the cause is data related, resource related, specific to a node, memory related, etc.

Hextoraw() not working with IN clause while using NamedParameterJdbcTemplate

I am trying to update certain rows in my Oracle DB using an id which is of type RAW(255).
Sample ids: 0BF3957A016E4EBCB68809E6C2EA8B80, 1199B9F29F0A46F486C052669854C2F8...
@Autowired
private NamedParameterJdbcTemplate jdbcTempalte;
private static final String UPDATE_SUB_STATUS = "update SUBSCRIPTIONS set status = :status, modified_date = systimestamp where id in (:ids)";
public void saveSubscriptionsStatus(List<String> ids, String status) {
MapSqlParameterSource paramSource = new MapSqlParameterSource();
List<String> idsHexToRaw = new ArrayList<>();
String temp = new String();
for (String id : ids) {
temp = "hextoraw('" + id + "')";
idsHexToRaw.add(temp);
}
paramSource.addValue("ids", idsHexToRaw);
paramSource.addValue("status", status);
jdbcTempalte.update(UPDATE_SUB_STATUS, paramSource);
}
The block of code above executes without any error, but the updates are not reflected in the DB. If I skip hextoraw() and just pass the list of ids, it works fine and also updates the data in the table; see the code below:
public void saveSubscriptionsStatus(List<String> ids, String status) {
MapSqlParameterSource paramSource = new MapSqlParameterSource();
paramSource.addValue("ids", ids);
paramSource.addValue("status", status);
jdbcTempalte.update(UPDATE_SUB_STATUS, paramSource);
}
This code works fine and updates the table, but since I am not using hextoraw() it does a full table scan for the update, which I don't want because I have created indexes. Using hextoraw() should let the update use the index, but then it does not update the values, which is kind of weird.
I got a solution myself by trying all the different combinations:
@Autowired
private NamedParameterJdbcTemplate jdbcTempalte;
public void saveSubscriptionsStatus(List<String> ids, String status) {
String UPDATE_SUB_STATUS = "update SUBSCRIPTIONS set status = :status, modified_date = systimestamp where id in (";
MapSqlParameterSource paramSource = new MapSqlParameterSource();
String subQuery = "";
for (int i = 0; i < ids.size(); i++) {
String temp = "id" + i;
paramSource.addValue(temp, ids.get(i));
subQuery = subQuery + "hextoraw(:" + temp + "), ";
}
subQuery = subQuery.substring(0, subQuery.length() - 2);
UPDATE_SUB_STATUS = UPDATE_SUB_STATUS + subQuery + ")";
paramSource.addValue("status", status);
jdbcTempalte.update(UPDATE_SUB_STATUS, paramSource);
}
What this does is build a query with all the ids wrapped in hextoraw as id0, id1, id2, ..., and add those values to the MapSqlParameterSource instance. This worked fine and also used the index when updating my table.
After running my new function the query looks like:
update SUBSCRIPTIONS set status = :status, modified_date = systimestamp where id in (hextoraw(:id0), hextoraw(:id1), hextoraw(:id2)...)
The MapSqlParameterSource instance looks like:
{("id0", "randomUUID"), ("id1", "randomUUID"), ("id2", "randomUUID")...}
Instead of doing string manipulation, convert the list to a List of byte arrays:
List<byte[]> productGuidByteList = stringList.stream().map(item -> GuidHelper.asBytes(item)).collect(Collectors.toList());
parameters.addValue("productGuidSearch", productGuidByteList);
public static byte[] asBytes(UUID uuid) {
ByteBuffer bb = ByteBuffer.wrap(new byte[16]);
bb.putLong(uuid.getMostSignificantBits());
bb.putLong(uuid.getLeastSignificantBits());
return bb.array();
}
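For completeness, a minimal, untested sketch of how the byte-array binding plugs into the original named-parameter update, so the RAW(255) column is compared directly and hextoraw() is not needed at all. The hexToBytes helper is my own addition (the ids in the question are plain hex strings without dashes); the other names are reused from the question:
// Bind the ids as byte[] so Oracle compares them against the RAW(255) column directly;
// the index on id can still be used and no hextoraw() call is required.
List<byte[]> idBytes = ids.stream()
        .map(id -> hexToBytes(id))
        .collect(Collectors.toList());
MapSqlParameterSource paramSource = new MapSqlParameterSource();
paramSource.addValue("ids", idBytes);
paramSource.addValue("status", status);
jdbcTempalte.update(
        "update SUBSCRIPTIONS set status = :status, modified_date = systimestamp where id in (:ids)",
        paramSource);

// Converts a hex string such as "0BF3957A016E4EBCB68809E6C2EA8B80" into its raw bytes.
private static byte[] hexToBytes(String hex) {
    byte[] out = new byte[hex.length() / 2];
    for (int i = 0; i < out.length; i++) {
        out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
    }
    return out;
}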

How should I be using the Kentico TreeNode.Update method?

I am trying to run the attached code to update some data for a particular document type, but it is not actually updating anything.
My currentDocumentNodeId() method pulls back a NodeId based on some other criteria, and each of the nodes it retrieves is of the type HG.DocumentLibraryItem, which has the columns IsPublic, IsRepMining, IsRepPower, IsRepProcess, and IsRepFlexStream. But when I call the Update method and then look at those columns in the SQL table for this custom document type, the values are all null. Each of those columns in the HG.DocumentLibraryItem document type is set to boolean. I have tried using the Node.SetValue() method with both true and 1; neither way updates the column.
Any ideas what I am doing wrong? Am I making the call correctly?
See my code below:
public static void GetDocumentAreaAssignments()
{
var cmd = new SqlCommand
{
CommandText ="This is pulling back 2 rows, one with Id and one with Text",
CommandType = CommandType.Text,
Connection = OldDbConnection
};
OldDbConnection.Open();
try
{
using (SqlDataReader rdr = cmd.ExecuteReader())
{
var count = 0;
while (rdr.Read())
{
try
{
var documentId = TryGetValue(rdr, 0, 0);
var areaAssignment = TryGetValue(rdr, 1, "");
var currentDocumentNodeId = GetNodeIdForOldDocumentId(documentId);
var node = currentDocumentNodeId == 0
? null
: Provider.SelectSingleNode(currentDocumentNodeId);
if (node != null)
{
switch (areaAssignment.ToLower())
{
case "rep mining":
node.SetValue("IsRepMining", 1);
break;
case "rep power":
node.SetValue("IsRepPower", 1);
break;
case "rep process":
node.SetValue("IsRepProcess", 1);
break;
case "rep flexStream":
node.SetValue("IsFlexStream", 1);
break;
case "public":
node.SetValue("IsPublic", 1);
break;
}
node.Update();
Console.WriteLine("Changed Areas for Node {0}; item {1} complete", node.NodeID,
count + 1);
}
}
catch (Exception ex)
{
}
count++;
}
}
}
catch (Exception)
{
}
OldDbConnection.Close();
}
The coupled data (such as the IsRepMining field) are only updated when you retrieve a node that contains them. To do that, you have to use the overload of the SelectSingleNode() method that takes a className parameter. However, I'd recommend you always use DocumentHelper to retrieve documents (it will ensure you work with the latest version of a document, e.g. in case of workflows):
TreeProviderInstance.SelectSingleNode(1, "en-US", "HG.DocumentLibraryItem")
DocumentHelper.GetDocument(...)
DocumentHelper.GetDocuments(...)

Dart PowerSNMP GetTable does not return any record

I am using Dart PowerSNMP for .NET.
I am trying to query a table using GetTable(), but it does not work for me.
The C# code below does not return any rows:
const string address = "xxx.xxx.xx.x";
using (var mgr = new Manager())
{
var slave = new ManagerSlave(mgr);
slave.Socket.ReceiveTimeout = 13000;
try
{
//Retrieve table using GetNext requests
Variable[,] table = slave.GetTable("1.3.6.1.4.1.14823.2.2.1.1.1.9",
SnmpVersion.Three,
null,
new Security()
{
AuthenticationPassword = "mypassword1",
AuthenticationProtocol = AuthenticationProtocol.Md5,
PrivacyPassword = "mypassword2",
PrivacyProtocol = PrivacyProtocol.Des
},
new IPEndPoint(IPAddress.Parse(address), 161),
0);
}catch(Exception ez)
{
}
}
This is supposed to return a set of records for the given OID, but it does not return anything to me. When I use a MIB browser, a GetBulk operation fetches all the records.
But what is wrong with GetTable() here?
