Dynamics AX 2009 Database Diagram - dynamics-ax-2009

I'm desperately searching for a Dynamics AX 2009 database diagram.
Does anyone have it?
Thank you

For a list of tables in AX 2009, see the MSDN documentation Dynamics AX 2009 Tables.
To create a Visio UML Data Model diagram, place the tables you want to include in a project and use the Reverse Engineering Tool.

With more than 2000 tables it would be a huge diagram.
If you look at the tables in the AOT, most of them have relations defined on them.
You can also use the Cross-Reference:
Select the table / table field you are interested in knowing more about.
Right click / Add-Ins / Cross-Reference / Used By.
Here you will get a complete list of everywhere the selected object is used.
The cross-reference must of course be updated prior to this.
It can be a long process; most of the time it is only kept up to date in development and test environments.
You update it from the main AOT node: right click there and follow the same menu path as above, but select 'Update' instead of 'Used By'. It may take hours.

Me too. Let us know if you find anything, because I don't think it exists... In the meantime, try this unfinished AX40datamodel.doc document to kick-start your knowledge.

Related

Extend a user-defined table in SAP

I am trying to change a selection table within our packing list menu in our SAP system, but I do not know how to do this. My colleague, who is out indefinitely, created a table of packing materials for our warehouse staff. This table now needs to be extended by two further entries. The selection table is in the packing list table DLN7. I have also shown this in the attached picture.
So far I have checked all custom tables and custom windows. However, the table displayed is not there. Does anyone know where in SAP I have to look?
I would be very happy to receive further hints on this.
Thank you very much in advance.

How to iterate an SAP table and click each row using Excel with UiPath?

I have an SAP table and I want to iterate each row and click the row using Excel in UiPath.
How can I do that? Can anyone explain briefly?
The tables are different, but it looks like the same scenario.
Consider this free resource as your starting point for learning how to automate SAP: UiPath Academy.
After you finish the foundation course, you may try the SAP-specific course to learn how to solve your issue.
OK, I think I can give you a good starter answer.
Note: I usually just convert all the data to a DataTable and work with that, since I am more familiar with it, but you can use the Excel activities and click features.
First, set the "Excel Application Scope" activity. That was my hang-up forever, and it seems to be the first thing that trips everyone up; it scopes all the other Excel activities that come after it.
After this, use a For Each loop; the following will show you how to do that easily:
https://forum.uipath.com/t/loop-something-x-number-of-times/24667
Then you can use Read/Write Row/Cell/Column etc. and use the 'i' integer counter to go through each row.
An alternative to the above is to set the Excel scope, then read the entire sheet to a DataTable, which I find easier if you are just doing data manipulation.
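If it helps, UiPath's Read Range activity returns a System.Data.DataTable, and looping it with For Each Row is conceptually the same as the plain C# sketch below. This is illustration only; the column names are invented for the example.

using System;
using System.Data;

class RowLoopSketch
{
    static void Main()
    {
        // Stand-in for the DataTable that Read Range would return from the sheet.
        var table = new DataTable();
        table.Columns.Add("OrderId", typeof(string));
        table.Columns.Add("Material", typeof(string));
        table.Rows.Add("4500001", "Pallet");
        table.Rows.Add("4500002", "Carton");

        // Equivalent of a For Each Row activity: each pass would drive one
        // Click on the matching row in the SAP table.
        foreach (DataRow row in table.Rows)
        {
            Console.WriteLine("Would click SAP row for order {0} ({1})", row["OrderId"], row["Material"]);
        }
    }
}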
I would also like to double down on Ilya Kochetov's suggestion of the Academy, and also the UiPath Forum, which is incredibly helpful.

Display distinct values in a LightSwitch browse screen

I have one browse screen which is fetching values from one entity (attached to a SQL data source);
the entity will look like the snapshot below.
So in the browse screen it is showing all row values (1, 2, 3 and 4) even though I removed the Role field from the screen. I want to display the distinct Emp ID, Name and Age. Please give me some suggestions.
The question tags LightSwitch 2013 and 2012, so it's not clear what the OP is using. Views handling in LightSwitch before VS2013 Update 2 can be a little more challenging (particularly around the definition of key fields), so the other possibility is to use a WCF-RIA service to reshape the data. Having a WCF-RIA service ready to go always comes in handy eventually, even if there are annoying limitations and quirks there as well.
The exact steps depend slightly on what version of VS you are using:
The canonical article by Eric Erhardt - http://blogs.msdn.com/b/lightswitch/archive/2011/04/08/how-do-i-display-a-chart-built-on-aggregated-data-eric-erhardt.aspx
An up to date version for VS2013 - http://lightswitchhelpwebsite.com/Blog/tabid/61/EntryId/2226/Creating-a-WCF-RIA-Service-for-Visual-Studio-2013.aspx
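To give a flavour of what such a service looks like, here is a minimal sketch along the lines of those two articles. The DTO, service and data-context names are placeholders, not anything LightSwitch generates for you:

using System.ComponentModel.DataAnnotations;
using System.Linq;
using System.ServiceModel.DomainServices.Server;

// Shape handed to the LightSwitch screen; it needs a [Key] so rows can be tracked.
public class EmployeeDto
{
    [Key]
    public int EmpId { get; set; }
    public string Name { get; set; }
    public int Age { get; set; }
}

public class EmployeeReportService : DomainService
{
    [Query(IsDefault = true)]
    public IQueryable<EmployeeDto> GetDistinctEmployees()
    {
        // MyDataContext / Employees stand in for however you reach the SQL data source.
        using (var db = new MyDataContext())
        {
            return db.Employees
                     .Select(e => new EmployeeDto { EmpId = e.EmpId, Name = e.Name, Age = e.Age })
                     .Distinct()
                     .ToList()          // materialise before the context is disposed
                     .AsQueryable();
        }
    }
}

You then attach to this RIA service as a data source and base the browse screen on GetDistinctEmployees.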
Happy to help further with specific queries if you decide to go down the WCF-RIA route.
Phil
If you don't have the option of driving your Browse screen from your employee table, I'd suggest creating a SQL view similar to the following (note that Role is left out, otherwise DISTINCT would still return one row per role): -
CREATE VIEW [dbo].[EmployeeView]
AS
SELECT DISTINCT
    EmpId,
    Name,
    Age
FROM
    dbo.YourTable
You can then attach to the view in LightSwitch and base the Browse screen on the attached view.
However, please bear in mind that you will only be able to view and not update the information as this type of view uses the DISTINCT clause.
The following blog post provides some basic details of using views in LightSwitch: -
Attaching to SQL Views

Very slow search of a simple entity relationship

We use CRM 4.0 at our institution and have no plans to upgrade presently, as we've spent the last year and a half customising and extending the CRM to work with our processes.
A tiny part of the model is a simple hierarchy: we have a group of learning rooms that has a one-to-many relationship with another entity that describes the courses available for that learning room.
Another entity has a list of all potential and enrolled students who have expressed an interest in whichever course.
That bit's all straightforward and works pretty well and is modelled into 3 custom entities.
Now, we've got an Admin application that reads the rooms and then wants to show the courses for that room, but only where there are enrolled students.
In SQL this is simplified to:
SELECT DISTINCT r.CourseName, r.OtherInformation
FROM Rooms r
INNER JOIN Students S
ON S.CourseId = r.CourseId
WHERE r.RoomId = #RoomId
And this indeed is very close to the eventual SQL that CRM generates.
We use a Crm QueryEntity, a Filter and a LinkEntity to represent this same structure.
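(For reference, built with the CRM 4.0 SDK's QueryExpression classes, that structure comes out roughly as below; the custom entity and attribute names are placeholders invented to mirror the SQL above.)

using System;
using Microsoft.Crm.Sdk.Query;

public static class CourseQueries
{
    public static QueryExpression BuildCoursesWithEnrolmentsQuery(Guid roomId)
    {
        // WHERE r.RoomId = @RoomId
        var roomCondition = new ConditionExpression();
        roomCondition.AttributeName = "new_roomid";
        roomCondition.Operator = ConditionOperator.Equal;
        roomCondition.Values = new object[] { roomId };

        var filter = new FilterExpression();
        filter.Conditions = new ConditionExpression[] { roomCondition };

        // INNER JOIN Students s ON s.CourseId = r.CourseId
        var studentLink = new LinkEntity();
        studentLink.LinkFromEntityName = "new_course";
        studentLink.LinkFromAttributeName = "new_courseid";
        studentLink.LinkToEntityName = "new_student";
        studentLink.LinkToAttributeName = "new_courseid";
        studentLink.JoinOperator = JoinOperator.Inner;

        // SELECT r.CourseName, r.OtherInformation
        var columns = new ColumnSet();
        columns.Attributes = new string[] { "new_coursename", "new_otherinformation" };

        var query = new QueryExpression();
        query.EntityName = "new_course";
        query.ColumnSet = columns;
        query.Criteria = filter;
        query.LinkEntities = new LinkEntity[] { studentLink };

        // Executed via crmService.RetrieveMultiple(query); CRM adds the SELECT DISTINCT itself.
        return query;
    }
}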
The problem now is that CRM normalizes a customized entity into a Base table, which holds the standard CRM entity data that all entities share, and an ExtensionBase table, which holds our customisations. To give flattened access to this, it creates a view that merges both tables.
This view is what is used by the Generated SQL.
Now the base tables have indices but the view doesn't.
The problem we have is that all we want to do is return Courses where the inner join is satisfied; it's enough to prove there are entries, and CRM makes it SELECT DISTINCT, so we only get one item back per Room.
At first this worked perfectly well, but now we have thousands of queries, it takes well over 30 seconds and of course causes a timeout in anything but SSMS.
I'm given to believe that we can create and alter indices on tables in CRM and that it's not considered to be an unsupported modification; but what about views?
I know that if we alter an entity then its views are recreated, which would of course make us redo our indices when this happens.
Is there any way to hint to CRM4.0 that we want a specific index in place ?
Another source recommends that where you get problems like this, then it's best to bring data closer together, but this isn't something I'd feel comfortable in trying to engineer into our solution.
I had considered putting in a new entity that only has RoomId, CourseId and an enrolment count in it, but that smacks of being incredibly hacky too; after all, an index would remove the need to duplicate this data and to have some kind of trigger that updates it after every student operation.
Lastly, whilst I know we're stuck on CRM 4 at the moment, is this the kind of thing that we could expect to have resolved in CRM 2011? It would certainly add more weight to the argument for upgrading this 5-year-old product.
Since views are "dynamic" (conceptually, their contents are generated on-the-fly from the base tables every time they are used), they typically can't be indexed. However, SQL Server does support something called an "indexed view": you create a unique clustered index on the view, and the query optimizer should then be able to use it to speed up your join.
Someone asked a similar question here and I see no conclusive answer. The cited concerns from Microsoft are Referential Integrity (a non-issue here) and Upgrade complications. You mention the unsupported option of adding the view and managing it over upgrades and entity changes. That is an option, as unsupported and hackish as it is, it should work.
FetchXml does have aggregation, but the query execution plans still use the views; here is the SQL generated from a simple select count from incident:
select
    top 5000 COUNT(*) as "rowcount",
    MAX("__AggLimitExceededFlag__") as "__AggregateLimitExceeded__"
from (select top 50001 case when ROW_NUMBER() over(order by (SELECT 1)) > 50000 then 1 else 0 end as "__AggLimitExceededFlag__" from Incident as "incident0" ...
I don't see a supported solution for your problem.
If you are building an outside admin app and you are hosting CRM 4 on-premise, you could go directly to the database for your query, bypassing the CRM API. Not supported, but it would allow you to solve the problem.
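As a sketch of that last option (unsupported, as noted), the admin app could run the SQL from the question directly over ADO.NET. The connection string and table names below are placeholders; in practice you would point this at the CRM organisation database or its filtered views.

using System;
using System.Data.SqlClient;

class DirectCourseQuery
{
    static void Main()
    {
        // Placeholder connection string and object names.
        const string connectionString = "Server=crmsql;Database=MyOrg_MSCRM;Integrated Security=True";
        var roomId = new Guid("00000000-0000-0000-0000-000000000000");

        const string sql =
            @"SELECT DISTINCT r.CourseName, r.OtherInformation
              FROM Rooms r
              INNER JOIN Students s ON s.CourseId = r.CourseId
              WHERE r.RoomId = @RoomId";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@RoomId", roomId);
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0} - {1}", reader["CourseName"], reader["OtherInformation"]);
            }
        }
    }
}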
I'm going to add this as a potential answer, although I don't believe it's a sustainable or indeed valid long-term solution.
After analysing the indexes that CRM had defined automatically, I realised that selecting more information in my query would be enough to fulfil the column requirements of an index, and now the query runs in less than a second.

Problem refreshing tables in the LINQ to SQL designer

I have been using LINQ to SQL for a while, and there is one thing that has always bothered me. Whenever I modify the schema of a table, in order to refresh it in the designer, I have to delete it and then add it back. That's fine, but this means I have to actually find the table in the designer. I have about 100+ tables in my database, and every time I do this, it's like finding a needle in a haystack. Well, maybe it's not that bad, but seriously, it takes way longer than it should.
Is there another option for refreshing tables that I am unaware of?
Some people use SqlMetal to 'refresh/update' their LINQ to SQL designer. The designer does not have support for refreshing the schema when the DB changes. You have to manually drop the table and re-add it.
The ADO.NET Entity Framework, I believe, can refresh. I've not used it, but I think I saw this at a TechEd demo this year.
Helpful Info: Google's results for SqlMetal.
This is not possible using the VS LINQ to SQL designer.
You can do this using LLBLGEN PRO, a third party tool, instead of the built-in linq to sql designer. It isn't free but it does do a ton of other stuff as well, which of course you may or may not need.
LLBLGEN PRO is actually a full set of ORM tools, but also includes an enhanced linq-to-sql designer with 'refresh model from SQL' functionality.
See here for description of the issue - http://weblogs.asp.net/fbouma/archive/2008/05/01/linq-to-sql-support-added-to-llblgen-pro.aspx
And here for the tool - http://www.llblgen.com/
I don't do any customization of the content on the designer so after table changes I just hit CTRL+A followed by DEL. Then shift-select all of my tables and slap them back onto the designer. I don't have 100s of tables yet so not sure if things slow down at some point but with 20+ tables it just takes a second.
I have written an add-in that can do that (in both directions: database -> DBML, or DBML -> SQL-DDL diff script).
Unlike SQLMetal (or EF's "update model from database") mentioned in another reply, the add-in does a true sync/refresh; applying changes corresponding only to the differences between the model and the underlying db.
That means any customizations (renamed properties/navigation properties etc) that you have made in other areas of your model will not be removed/overwritten unless they are in conflict with the underlying db schema. (in which case you can still preserve them by adding them to the add-in's "exclusion list")
You can download it and get a free 30-day trial license from http://www.huagati.com/dbmltools/
I have a similar comment, thought it might fit in here for anybody out there Googling a solution to this issue...
When I change the columns that are returned by a stored procedure, deleting the procedure from the designer and re-adding it does not work. The custom return type entity that the designer generates does not reflect the changes to the SP.
I've tried disconnecting the DB in the server explorer, even deleting and re-adding the connection.
The only solution I've found is this:
1. Delete the SP from the designer.
2. Save the dbml file (or the whole solution, whatever)
3. Completely close Visual Studio.
4. Re-open Visual Studio and your solution.
5. Re-add the stored procedure to the designer.
I think that qualifies as a blue ribbon pain in the rump.
Anybody got a simpler solution?
PS- To those of you with 100+ tables: Go get a real (real == mature) ORM tool. I personally vote for NetTiers. It rocks. Used it for years with no (or at least very few) complaints. You'll probably have to buy CodeSmith to use it effectively, but it's worth it. The templates are open source. And there are templates for nHibernate as well. But I've found that I don't really dig on Java ports. If I'm gonna code on MS platforms I want code that was "born" there...
...editorial complete. :P
I have had similar issues with the designer. The best thing I can suggest is creating multiple contexts for different areas of your data access; I broke mine down to as few related tables as I could get away with for each functional area. You can re-use tables across contexts, so it isn't a big deal.
There's a template for VS 2008 that replaces the designer, it should ease refreshing your LINQtoSQL classes: http://damieng.com/blog/2008/09/14/linq-to-sql-template-for-visual-studio-2008
There are a couple of other options:
Edit the .dbml file that the designer uses to draw the tables and generate the code. I've used this approach when the changes are small (adding a couple of columns, creating a simple table)
Use sqlmetal to create the required xml for the changed tables and move the declarations by hand to the .dbml file. This one is better for when the changes are either more complex or larger.
I personally detest using the designer, and I've had various issues with it whenever I've dared to use it.
I mostly use LINQ for very simple CRUD (no linked entities or anything), and if that's the case with you, it might be worth straying from the designer crutch. Especially since defining LINQ-to-SQL entities is as easy as this:
[Table("dbo.my_table")]
public class MyTable
{
[Column("id", AutoSync = AutoSync.OnInsert, IsDbGenerated = true, IsPrimaryKey = true)]
public Int32 Id { get; set; }
[Column("name", DbType="NVarChar(50) NOT NULL")]
public String Name { get; set; }
}
This way, all your entities have their own files, which makes finding them much easier, though you'll still have to add/update the properties manually.
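For what it's worth, querying a hand-mapped class like the one above needs nothing from the designer either; here is a minimal sketch using a plain DataContext (the connection string is a placeholder):

using System;
using System.Data.Linq;
using System.Linq;

class Program
{
    static void Main()
    {
        // Plain DataContext over the hand-mapped MyTable entity above.
        using (var db = new DataContext(@"Server=.\SQLEXPRESS;Database=MyDb;Integrated Security=True"))
        {
            var rows = db.GetTable<MyTable>()
                         .Where(t => t.Name.StartsWith("A"))
                         .OrderBy(t => t.Id)
                         .ToList();

            foreach (var row in rows)
                Console.WriteLine("{0}: {1}", row.Id, row.Name);
        }
    }
}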
Of course, if you'd refactor 100+ tables, that might not be an option ;)
