Which MIB do I need if I am trying to find the routing table of a Linksys WRT54G with OpenWrt installed on it?
That should be MIB-II. The SNMP object identifier (OID) is .1.3.6.1.2.1.4.21, which translates to ip.ipRouteTable. (This works for me on Windows, so I see no reason why it should be different for your Linksys/OpenWrt system.)
Here is the hierarchy table for the routing table leaf:
1 - ISO assigned OIDs
1.3 - ISO Identified Organization
1.3.6 - US Department of Defense
1.3.6.1 - Internet
1.3.6.1.2 - IETF Management
1.3.6.1.2.1 - SNMP MIB-2
1.3.6.1.2.1.4 - ip
1.3.6.1.2.1.4.21 - ipRouteTable
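As a quick check from a machine on the LAN, you can walk that subtree with Net-SNMP (this assumes the snmpd package is installed and running on the OpenWrt device and that the community string is the default public; substitute your router's actual address):

snmpwalk -v 2c -c public 192.168.1.1 1.3.6.1.2.1.4.21

The walk returns the table's columns (ipRouteDest, ipRouteNextHop, ipRouteMetric1, and so on), indexed by destination.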
I am trying to see if there is a way to get the output of "show security policies hit-count descending" via SNMP. The command produces output like this:
show security policies hit-count descending
node0:
--------------------------------------------------------------------------
Logical system: root-logical-system
Index   From zone         To zone   Name                          Policy count
1       WIFI-DEVICETEST   UNTRUST   allow-internet-traffic-only   284046
2       USERS             UNTRUST   allow-all                     273438
3       AV                UNTRUST   allow-media-to-internet       187757
The closest MIB I found with information related to security policies is "jnxJsSecPolicyMIB", but I am unable to figure out how to get the hit count using it.
I want to set a conflict domain (incompatibility) at the INV org level for a concurrent program in Oracle Apps.
Suppose we have three orgs, A, B, and C, and my concurrent program is named xyz.
xyz should be able to run for all three orgs at the same time, but two instances should not run concurrently for the same org (A and A, B and B, or C and C).
It is possible, as per the Oracle AOL documentation.
Conflict Domains
In Oracle Applications, data is stored in database tables that belong to a particular application. Each table may also contain information used to determine what conditions need to be met to access the individual records. These conditions may consist of one or more of the following data groupings:
SOB - based on the profile option GL_SET_OF_BOOKS; multiple installations are referred to as MSOB
Multiple operating units - determined by the profile option MO_OPERATING_UNIT (referred to as MULTIORG)
Multiple orgs - determined by the profile option INV_ORGANIZATION_ID; used by Manufacturing applications
HR - may use the business group as a conflict resolution domain
FA - may use the FA book
etc.
All programs are assigned a conflict domain when they are submitted. If a domain is defined as part of a parameter, the concurrent manager will use it to resolve incompatibilities. If the domain is not defined by a parameter, the concurrent manager uses the value defined for the profile option Concurrent:Conflicts Domain. Lastly, if the domain is not provided by a program parameter and the Concurrent:Conflicts Domain profile option has not been defined, the 'Standard' domain is used. The Standard domain is the default for all requests.
All programs use the Standard conflict domain unless a value is defined for the profile option Concurrent:Conflicts Domain or a conflict domain is defined through a program parameter.
You can refer to the following links for further details.
https://docs.oracle.com/cd/A60725_05/html/comnls/us/fnd/incomp01.htm
https://docs.oracle.com/cd/A60725_05/html/comnls/us/fnd/incomp02.htm
We are trying to set up cache expiration in Pivotal Cloud Cache (PCC), using GemFire. We have set up our region in PCF:
Cluster-0 gfsh>describe region --name=/CartTest
Type   | Name                    | Value
------ | ----------------------- | ---------
Region | data-policy             | PARTITION
       | entry-idle-time.timeout | 60
       | size                    | 0
       | statistics-enabled      | true
       | entry-idle-time.action  | DESTROY
When we create our Cart object, it is written to the cache (we can see it in the size entry above).
If we access our object from our code, it does not seem to be updating the access time for the entry. For instance:
11:00:00 - create entry
11:00:30 - access entry
11:01:00 - entry is gone
I would have expected the entry to still exist until 11:01:30 (I'm using ridiculously short timeouts just for testing). The idle time almost seems to be acting just like Time-To-Live. When we look at the lastAccessedTime for the region using gfsh, it is not being updated.
Any idea what I'm doing wrong here?
A few things to verify:
Can you please share the code showing how you store data in the PCC regions?
Is the region name correct? Since you are using the region CartTest in gfsh, your @Region annotation (assuming you are using spring-data-gemfire on the client side) should also use the CartTest region name.
An easy way to put data using SDG (spring-data-gemfire) is via the Spring Data Repository abstraction.
Please refer to the sample application here; specifically, the domain class can be created like here and the repository like here.
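To illustrate the shape of it, a minimal sketch assuming the CartTest region from the question (class and field names here are hypothetical, and the exact @Region import path varies between SDG versions):

import org.springframework.data.annotation.Id;
import org.springframework.data.gemfire.mapping.annotation.Region;
import org.springframework.data.repository.CrudRepository;

// Cart.java - maps instances of this class to entries in the /CartTest region.
@Region("CartTest")
public class Cart {

    @Id // used as the region key, so repository lookups become Region.get(key)
    private String id;

    private String customerName;

    // constructor, getters and setters omitted for brevity
}

// CartRepository.java - Spring Data generates the implementation;
// save() maps to Region.put() and findById() to Region.get().
public interface CartRepository extends CrudRepository<Cart, String> {
}

With this in place, cartRepository.save(cart) and cartRepository.findById(id) operate directly against the /CartTest region.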
CORRECTION: The reason the lastAccessedTime was not being updated was that we were not getting the entry via the ID field; we were searching on two other fields in the object. When we took those two fields, created a composite key, and made it the @Id field, the time was updated when we retrieved the object.
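A sketch of what such a composite key can look like (the two field names are hypothetical stand-ins for the fields being searched on):

import java.io.Serializable;
import java.util.Objects;

// Composite key built from the two fields previously used for searching.
// GemFire compares keys with equals() and, for partitioned regions,
// routes entries by hashCode(), so both must be implemented.
public class CartKey implements Serializable {

    private final String customerId;
    private final String storeId;

    public CartKey(String customerId, String storeId) {
        this.customerId = customerId;
        this.storeId = storeId;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof CartKey)) return false;
        CartKey that = (CartKey) o;
        return customerId.equals(that.customerId)
                && storeId.equals(that.storeId);
    }

    @Override
    public int hashCode() {
        return Objects.hash(customerId, storeId);
    }
}

The domain class then declares @Id private CartKey key; and the repository becomes CrudRepository<Cart, CartKey>, so findById(new CartKey(...)) is a plain key-based Region.get(), which is what updates the entry's lastAccessedTime.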
With partitioned GemFire regions, an access on a secondary copy does not update the lastAccessedTime of the primary. So this won't do what we want; we'll need to add some code.
I need to design the structure of the tables with the product data to meet the following requirements:
1. A product consists of the following fields: EAN, CN, description, pvp
2. There are several types of users that access the products. The user types are not stable; they can be added or deleted at any time.
3. Any of the fields of the products may vary depending on the type of user who views it. For example:
We have three users:
1 - John - guest
2 - David - client
3 - Vicent - vip
We have this product with this data by default:
8470001234567 - 123456 - Iphone X - $799
Guest users instead of seeing this data see the following:
8431239876547 - 987654 - Iphone X - $849
The client users see the data by default and the vip users see:
8470001234567 - 654321 - Iphone X Sale - $709
This means that a user sees the default data of a given product unless there is an exception for its type. The exception can affect any field (except the id).
I can think of this structure:
PRODUCTS: id, ean, cn, description, pvp
PRODUCT_EXCEPTION: product_id, user_type_id, ean, cn, description, pvp
I have verified that with this structure many queries are made. Can you think of a way to optimize this so that so many queries are not necessary?
Note 1: the products are contained within offers that have a certain number of products.
Note 2: I use Laravel. The Offer model has a relationship with products, that is, I obtain the products in the following way: $offer->products()
Looking at just the problem description, I ended up with a design equivalent to what you are proposing with your PRODUCT_EXCEPTION table.
Reasoning:
Since user types vary, you cannot put the per-type values directly in the Product table (e.g., 3 user types would mean 3 price columns, and the types change over time). So you need a link table between Product and UserType.
The link table will contain the characteristics of 1 Product, for 1 UserType.
If you want to have a default value, you can put the characteristics in the Product table as well. But then your queries become bigger: check if there is an exception value, and if not, use the default.
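For example, that exception-or-default logic can be folded into a single query per user type (a sketch against the tables proposed above; it assumes a PRODUCT_EXCEPTION column is NULL when that field is not overridden):

SELECT p.id,
       COALESCE(e.ean, p.ean)                 AS ean,
       COALESCE(e.cn, p.cn)                   AS cn,
       COALESCE(e.description, p.description) AS description,
       COALESCE(e.pvp, p.pvp)                 AS pvp
  FROM products p
  LEFT JOIN product_exception e
    ON  e.product_id = p.id
    AND e.user_type_id = ?    -- the viewing user's type
 WHERE p.id IN (/* the ids of the offer's products */);

The LEFT JOIN keeps products that have no exception row, and COALESCE falls back to the default column by column, which matches the requirement that an exception can override any subset of fields.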
So your solution makes sense to me.
We currently have a system that has contacts in it. It will be mapped to the contact entity in Dynamics. Each contact has an address history (yes, some of them have moved in the last 20 years).
We are soon going to import the old system into Dynamics, and I am wondering how I can import a contact's address history. Let's assume I have the user 'John':
| Name | Address    | LivedThereFrom | LivedThereTo |
-----------------------------------------------------
| John | 123 X road | 2005           | 2008         |
| John | 123 Y road | 2008           | 2010         |
| John | 123 Z road | 2010           |              |  ==> current address
So I will import 'John', then (with audit activated on addresses) import his address from 2005 to 2008, then update his address to '123 Y road', and finally update it to '123 Z road' so that the full history is available in Audit.
The problem is the following: how can I 'tag' those addresses as 2005 to 2008, 2008 to 2010, and then 'current'? I thought of using the 'created_on' field in the Audit table to help me, but there seems to be no way to modify that data (except by going directly into the database and losing Microsoft's support on the product).
If the client insists on going through the product, then another option is to import the addresses multiple times.
Setup
a) Turn on Auditing for the entity.
b) Add a new field to store the date on which the address was first used (Address Commencement Date).
Import
a) Import each customer's oldest address first, setting the Address Commencement Date to the LivedThereFrom date.
b) Export the addresses and make them available for re-importing.
c) Update the values in the spreadsheet based on the next available LivedThereFrom for that customer. Hint: convert the exported spreadsheet to XLSX first and use formulas (e.g. VLOOKUP) to work out which ones to replace. If the spreadsheet is really large, you may need to split it to stay under the 5 MB re-import limit. You could use the ROW_NUMBER function in SQL to get a list of the next addresses to update from your source system.
d) Continue doing c) until all records are loaded.
In the audit history you will see the date on which each new address became active, via the Address Commencement Date.
If they don't mind the direct database approach, I'd go straight for the audit tables.
Good luck.
One way that doesn't involve changing the way Dynamics handles contact addresses or doing any unsupported stuff is to create a custom entity whose job is to store all of your contacts' address history. It could contain the address, the dates between which the contact lived there, and (importantly) an N:1 or N:N relationship between your new address entity (say, new_Address) and the Contact entity to link the two together.
A drawback is that, once this is done, you would probably have to introduce some business logic to sync the contact's address records to this new entity; that is certainly possible within Dynamics and perhaps, at some level, unavoidable.