I'm kind of confused about the relationship among ACTCHL, MAXCHL, MAXCHANNELS and MAXACTIVECHANNELS. Is the ACTCHL property the same as MAXACTIVECHANNELS, and MAXCHL the same as MAXCHANNELS? In addition, what are their default values? I can see either 100 or 200 and am not sure which one is correct.
Thanks
I just found that ACTCHL and MAXCHL are for z/OS, while MAXCHANNELS and MAXACTIVECHANNELS are for open systems. That's clear. However, please help answer the default value question.
Oddly enough, the best reference for these is in the WMQ Explorer manual rather than the WMQ manual. The tables in the Queue Manager Properties topic list the settings, their default values and, in some cases, the relationship between them. When you get there, use your browser's search for the specific attribute (e.g., MAXACTIVECHANNELS), because the tables are organized based on the tabs in WMQ Explorer rather than strict alphabetical sequence.
I'm trying to help my team streamline a data ingestion process that is taking up a substantial amount of time. We receive data in multiple formats and with attributes arranged differently. Is there a way using RapidMiner to create a process that:
Processes files on a schedule that are dropped into a folder (this one I think I know, but I'd love tips on this as scheduled processes are new to me)
Automatically identifies input filetype and routes to the correct operator ("Read CSV" for example)
Recognizes a relatively small number of attributes and arranges them accordingly. In some cases, attributes are named the same way as our ingestion format and in others they are not (phone vs phone # vs Phone for example)
The attributes we process mostly consist of name, id, phone, email, address. Also, in some cases names are split first/last and in some they are full name.
I recognize that munging files for such simple attributes shouldn't be that hard, but the number of files we receive and the lack of order make it very difficult to streamline a process without a bit of automation. I'm also going to move to a standardized receiving format, but for a number of reasons that's on the horizon and not an immediate solution.
I appreciate any tips or guidance you can share.
Your question is relatively broad, so unfortunately I can't give you a complete answer. But here are some ideas on how I would tackle the points you mentioned:
For full process scheduling, RapidMiner Server is what you are looking for. In that case you can either define a schedule (e.g., check regularly for new files) or even define a web service to trigger the process.
For selecting the correct operator depending on file type, you could use a combination of "Loop Files" and macro extraction to get the correct type, and then use either "Branch" or "Select Subprocess" to switch between different input routes.
The "Select Attributes" operator has some very powerful options to select only specific subsets. In your example I would go for a regular expression akin to [pP]hone.* to catch the different spelling variants. Also very helpful in that case are the "Reorder Attributes" operator and "Rename by Replacing" to create a common naming schema (see the sketch below for the general idea).
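RapidMiner implements all of this with operators rather than code, but just to make the routing and renaming idea concrete, here is a rough Python sketch of the same logic. The folder name, file types and name-mapping rules are placeholders for illustration, and the split first/last name case is left out:

```python
import re
from pathlib import Path

import pandas as pd

# Hypothetical mapping from name patterns to your canonical attribute names.
CANONICAL_NAMES = {
    r"(?i)^phone\s*#?$": "phone",
    r"(?i)^e-?mail$": "email",
    r"(?i)^(full\s*)?name$": "name",
    r"(?i)^id$": "id",
    r"(?i)^address$": "address",
}

def read_any(path: Path) -> pd.DataFrame:
    """Route to the right reader based on file extension (the 'Branch' idea)."""
    suffix = path.suffix.lower()
    if suffix == ".csv":
        return pd.read_csv(path)
    if suffix in (".xls", ".xlsx"):
        return pd.read_excel(path)
    raise ValueError(f"Unsupported file type: {path}")

def normalize_columns(df: pd.DataFrame) -> pd.DataFrame:
    """Rename columns to a common schema (the 'Rename by Replacing' idea)."""
    renamed = {}
    for col in df.columns:
        for pattern, canonical in CANONICAL_NAMES.items():
            if re.match(pattern, col.strip()):
                renamed[col] = canonical
                break
    return df.rename(columns=renamed)

# Loop over everything dropped into the (hypothetical) incoming folder.
for path in Path("incoming").glob("*"):
    frame = normalize_columns(read_any(path))
    print(path.name, list(frame.columns))
```

The equivalent RapidMiner process does the same thing with "Loop Files", "Branch"/"Select Subprocess" and "Rename by Replacing"; the point is simply that the type routing and the name normalization are two small, separable steps.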
A general tip when building more complex process pipelines is to organize your different tasks in sub-processes and use the "Execute Process" operator. This makes everything much more readable and maintainable. A good error-handling strategy is also important for dealing with unforeseen data formats.
For more elaborate answers and tips from many advanced RapidMiner users, I also highly recommend the RapidMiner community.
I hope this gives a good starting point for your project.
I have several SSIS packages that we use to load data from multiple different OLE DB data sources into our DB. Inside each package we have several Data Flow tasks that hold a large number of OLE DB Sources and Destinations. What I'm looking to do is see if there is a way to get a text output that holds all of the Destination flow configurations (Sources would be good too, but they're not top of my list).
I'm trying to make sure that all my OLE DB Destination flows point at the right table, as I've found a few hiccups, without having to double-click on each flow task and check that way; it just becomes tedious and is still prone to missing things.
I'm viewing the packages in Visual Studio 2013. Any help is appreciated!
I am not aware of any programmatic way to discover this data, other than building an application to read the XML within the *.dtsx package. Best advice: pack a lunch and have at it. I know for sure that there is nothing with respect to viewing and setting database tables (only server connections).
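If you do decide to read the package XML directly, a small script can take a lot of the tedium out of it. Below is a rough Python sketch that scans a folder of .dtsx files for OLE DB Destination components and prints the target table and connection reference it finds. The element and attribute names ("component", "componentClassID", "OpenRowset") reflect the newer package format (SSIS 2012 and later); older formats identify components by GUID class IDs instead, so treat the matching rules as assumptions to adjust for your packages.

```python
import xml.etree.ElementTree as ET
from pathlib import Path

def list_oledb_destinations(dtsx_path: Path):
    """Yield (package, component name, target table, connection ref) tuples."""
    tree = ET.parse(dtsx_path)
    for component in tree.iter():
        # Pipeline components are plain (un-namespaced) elements in newer
        # package formats, so match loosely on the local tag name.
        if not component.tag.endswith("component"):
            continue
        if "OLEDBDestination" not in component.get("componentClassID", ""):
            continue
        name = component.get("name", "?")
        table = None
        conn = None
        for child in component.iter():
            if child.tag.endswith("property") and child.get("name") == "OpenRowset":
                table = child.text           # e.g. [dbo].[SomeTable]
            if child.tag.endswith("connection"):
                conn = (child.get("connectionManagerRefId")
                        or child.get("connectionManagerID"))
        yield dtsx_path.name, name, table, conn

# Point this at the folder holding your packages (placeholder path).
for dtsx in Path("packages").glob("*.dtsx"):
    for row in list_oledb_destinations(dtsx):
        print(*row, sep="\t")
```

Dumping that output to a text file gives you a single list of destinations you can eyeball or diff, instead of opening each data flow by hand.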
Though, a solution I may add once you have determined the list: create a variable (or variables) to store the unique connection strings and then set those connection strings inside the source/destination components. This will make it easier to manage going forward. In fact, you can take it one step further by setting the same values as parameters, as opposed to variables, which have the added benefit of being exposed on the server. This allows either you or the DBA to set the values as you promote through environments or change server nodes.
Also, I recommend rationalizing that solution into smaller solutions if possible. In my opinion, there is nothing worse than one giant solution that tries to do it all. I am not sure if any of this is helpful, but for what it's worth, I do hope it helps.
You can use the SSIS Object Model for your needs. An example can be found here. Look in the method IterateAllDestinationComponentnsInPackage for the exact details. To start understanding the code, start in the Start method and follow the path.
Caveats: Make sure you use the appropriate Monikers and Class IDs for the Data Flow Tasks and your Destination Components. You can also use this for other Control Flow Tasks and Data Flow Components (for example, Source Components, which seems to be your other need); just keep the appropriate Monikers and Class IDs in mind there as well.
I work regularly on an OBIEE repository with 100+ facts and 200+ logical table sources.
Every time I add a new dimension, I have to go one-by-one to set the logical level for that dimension in each and every table source.
Is there any way, when adding a dimension, to default all LTS to a specific logical level?
Same answer as for cross-post here:
Is there any way, when adding a dimension, to default all LTS to a specific logical level?
No.
A model this size is probably not the best approach in an ideal world.
The only option you have is to look at a script-based approach. There is a supported API for modifying the RPD metadata (and here too), but it'd be pretty easy to screw things up, doubly so given the size and complexity of your existing RPD.
You can see an example of it in action in a blog post I wrote here. Note that all I change in that blog post is the value of an existing repository variable. Adding additional content, with the issue of GUIDs etc., gets very hairy indeed.
tl;dr: sit tight and keep clicking, unless you're feeling brave
I am using some SNMP traps for monitoring of applications. Now I was told that some monitoring systems might have problems if the order of the attributes within the traps is not the same as defined in the MIB. Given the complexity of the OIDs, which could easily be used to re-order the attributes, this surprised me, so I tried to find the relevant section of the RFC, but I could find neither anything that said any ordering is allowed nor anything that said the order is important. In other secondary documentation about SNMP I was not able to find anything useful either.
So this is more of a curiosity question, though it could also help in future projects using SNMP. Could anyone point me to the correct documentation as far as this problem is concerned? Or is this something that one piece of software might handle while another might not, so that I should check the actual documentation for that software?
I found the relevant document.
Section 3.1.2 specifies:
The VARIABLES clause, which need not be present, defines the
ordered sequence of MIB objects which are contained within
every instance of the trap type. Each variable is placed, in
order, inside the variable-bindings field of the SNMP Trap-
PDU. Note that at the option of the agent, additional
variables may follow in the variable-bindings field.
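To make the ordering concrete, here is a minimal sketch using pysnmp (assuming that library is available; the OIDs and the receiver address are placeholders): the variable bindings are handed over in the same order as the TRAP-TYPE's VARIABLES clause, and any agent-specific extras are simply appended at the end.

```python
# Minimal pysnmp sketch: send an SNMPv1 trap whose varbinds follow the order
# of a (hypothetical) TRAP-TYPE's VARIABLES clause, plus one extra binding.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectIdentity, ObjectType, NotificationType, Integer32, OctetString,
    sendNotification,
)

error_indication, error_status, error_index, var_binds = next(
    sendNotification(
        SnmpEngine(),
        CommunityData("public", mpModel=0),        # SNMPv1
        UdpTransportTarget(("127.0.0.1", 162)),    # placeholder trap receiver
        ContextData(),
        "trap",
        NotificationType(
            ObjectIdentity("1.3.6.1.4.1.99999.0.1")    # placeholder trap OID
        ).addVarBinds(
            # Bindings in the order the VARIABLES clause lists them...
            ObjectType(ObjectIdentity("1.3.6.1.4.1.99999.1.1"), Integer32(42)),
            ObjectType(ObjectIdentity("1.3.6.1.4.1.99999.1.2"), OctetString("app-01")),
            # ...followed by an optional extra binding added by the agent.
            ObjectType(ObjectIdentity("1.3.6.1.4.1.99999.1.9"), OctetString("extra")),
        ),
    )
)
if error_indication:
    print(error_indication)
```

Whether a given manager tolerates a different order is still up to that manager's implementation, so checking its documentation remains a good idea.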
Thanks to the person who pointed this out to me.
It happens sometimes that one feature seems to belong to more than one place.
Trivial example: let's say I've got the following menus:
File
Pending orders
Accepted orders
Tools
Help
I've got a search feature, and the same search window works for both pending and accepted orders (it's just an 'order status' combo you can change).
Where does this search feature belong?
The Tools menu seems to be a good choice, but I'm afraid users may expect the search for accepted orders to be in the Accepted orders menu, which would make sense.
Duplicating the menu entry in both the pending and accepted orders menus seems wrong to me.
What would you do? (And let's pretend we cannot merge the two order menus into one single menu.)
I think the problem you've run into is that you're thinking like a programmer (code duplication is bad). I'm not faulting you for it; I do the same thing. Multiple paths to the same screen, or multiple ways to handle the same process, can actually be extremely beneficial. I would guess that more than one person is going to use your program, and each probably has slightly different job functions. In essence, they have different needs for the application and will approach using it in different ways.
If you insist that all items have only one way of being accessed, some people will find the application beneficial and others won't. Sure, all people can learn to do a task a certain way, but it won't make sense to some users. It's not intuitive (read: familiar) to the way they are used to processing information, which means the application will ultimately be less beneficial to them. When people find a process (program, etc.) frustrating, they won't adopt it. They find reasons why the process needs to be changed or abandoned.
An excellent example of multiple approaches to a problem is Adobe Photoshop. Normally there are at least two different ways to access a function. Most users only know of one, because that's all they're concerned with, and they're really comfortable with that one because it makes the most sense to them. With a little extra work, Adobe scored a huge win, because more people find their product intuitive.
Having a feature in multiple locations is not a bad thing. Consider the overall workflow for viewing both pending orders and accepted orders, and think of your new feature as a component, rather than a one-off entity.
After you map out exactly what tasks a user completes in the pending and accepted order viewing process, see where having the ability to search would provide value (by shortening the workflow or otherwise). This is where your search component belongs.
The main thing to remember about UI is that all that really matters in the end is whether your design makes using your application or site a better experience for your users.
In the search example you list above, you'll commonly see apps take one of two approaches:
Put the search feature in a single location and allow the user to filter the search by selecting pending or accepted, or
Put the search feature in both menus, already configured for the type of search to be done based on the menu it was launched from.
If you repeat the above choice for a number of factors you'll see a much more advanced (aka 'complicated') search interface for number one, and a much simpler (aka 'restrictive') search interface for number two.
Which one is best completely depends on your users. This is why many general applications have a simple search by default and a link to a more advanced search for those that want or need the additional capabilities; they're attempting to make everyone happy. There is absolutely nothing wrong with that if you're writing for a wide variety of people with different needs. If you're writing for a set of users with a restricted set of needs however, you can make some better choices.
In my experience your best bet is to work with one or two of your primary users and map out all of the steps they need to take to accomplish each of the tasks the application will be helping them with. If there aren't a lot of branching points in that sequence of steps, there shouldn't be a lot of choices or settings to make in the application; otherwise the users may feel that the app is harder to work with than it needs to be.
For the search example above, if the user has already navigated into the Pending Orders menu, the likelihood that they'll want to launch a search for Accepted Orders is very small, and having to make that choice, or go elsewhere to do the search, is an extra decision or action they'll need to take. The basic principle: if your user has already made a decision, use it; don't make them tell you again.
Use the UI you come up with as a first cut. Let your users, or a subset of them, try it out and make suggestions. If you have the option, watch them use it. You'll learn far more about how to improve the interface by seeing how they work with it than you will from what they tell you.
Generally you do not want the same menu item appearing in different menus. It adds complexity and clutter to the menu, and users will wonder if the two menu items are really the same or not. When it appears that a menu item belongs in two places, then you may have a more basic problem with your menu organization.
For instance, your example shows a menu bar that is organized by the class or attribute of the object the commands within act on. In general, the menu bar should be organized by category of action not type of object. For example, you could have a Retrieval menu for commands like Search and other means of displaying orders, and a Modify menu for processing the orders (e.g., updating, accepting, forwarding). Both menus would have menu items that apply to both types of objects, although some commands may apply to only one.
Organizing commands by object type is actually a good idea but it is better accomplished with a context menu (right click) than the menu bar.
I would try the search in both the Accepted Orders and Pending Orders menus. However, user testing will show if this is a good idea or not. But it also depends on your user base.
You are doing user testing right?
...you may already know this, but this is a good place to use the command/action pattern, IMHO.
So to answer your question: IMO, yes, it is OK :) This situation definitely warrants it.
Just put it under both menus and have it open your search window, pre-configured for the order type whose menu it was launched from. Name them accordingly and voilà, they're actually two different actions, even though they use the same code/component.
Keep the user-selectable "status combo you can change" in the search window active, though, so the user can still adjust the settings without relaunching it from the other menu... and then perhaps rethink the structure; see some of the great answers in here for ideas ^^
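To illustrate the command/action idea, here is a small, framework-neutral Python sketch (all names are made up for the example): each menu gets its own action, pre-configured with a default order status, while both actions reuse the same search component.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical shared search component; the status stays editable in the window.
def open_search_window(default_status: str) -> None:
    print(f"Opening search window with status pre-set to '{default_status}'")

@dataclass
class MenuAction:
    """A small command object: a label plus the action it triggers."""
    label: str
    run: Callable[[], None]

# Two distinct actions that reuse the same component, pre-configured per menu.
search_pending = MenuAction("Search pending orders...",
                            lambda: open_search_window("Pending"))
search_accepted = MenuAction("Search accepted orders...",
                             lambda: open_search_window("Accepted"))

# Wiring them into their respective menus:
pending_menu = [search_pending]
accepted_menu = [search_accepted]

for action in pending_menu + accepted_menu:
    print(action.label)
    action.run()
```

The duplication lives only in the menu wiring; the search window itself stays a single component.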