Count the number of rows from Remote Desktop Connection Navision - UiPath

Image from the Nav form:
Hello,
What would be the proper way to count the rows in a Navision form over a Remote Desktop Connection?
UiPath recognizes the RDP screen as an image.
The hard part is that most of the time there are more than 50 rows.
The total number of rows differs every day; it can be 10, 20 or more than 200.
Do you have any idea where I should start?

You may consider using UiPath Remote Runtime to access the RDP session natively, without the need to work with images. If you can install it, then it's a simple data scraping exercise.
If your security considerations do not allow that, your best bet is AI Computer Vision, which will recognize the screen content.
If that does not work for some reason, you're in the land of OCR.
Here UiPath could help you with IntelligentOCR and extractors, but I am afraid that only the FlexiCapture extractor does exactly what you need at the moment, and it would require an ABBYY FlexiCapture license.
An alternative approach would be working with Navision via its API, or extracting the table into Excel (it is usually allowed).
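If you do end up with an Excel export, counting the rows afterwards is trivial. Here is a minimal sketch using Apache POI; the file name and the assumption of a single header row are mine, not from the question:

import java.io.File;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;

// Counts the data rows in a hypothetical Navision export file.
public class RowCounter {
    public static void main(String[] args) throws Exception {
        try (Workbook wb = WorkbookFactory.create(new File("navision_export.xlsx"))) {
            Sheet sheet = wb.getSheetAt(0);
            // getLastRowNum() is zero-based, so with one header row at index 0
            // it equals the number of data rows.
            System.out.println("Row count: " + sheet.getLastRowNum());
        }
    }
}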

Related

ODBC Performance: Access 2010 and SQL Server 2008 R2

I have an Access 2010 application. Access is only the front end; the back end is SQL Server 2008. The connection between them is ODBC, and the ODBC driver is "SQL Server" (version 6.01.7601.17514).
In Access there is a table with over 500,000 rows, each with 58 columns. Performance is very, very slow most of the time; searching on a single column is not possible because Access freezes.
I know that's not a new problem...
Now my questions:
Is the driver OK? When I create an ODBC connection locally (Windows 8), I can also choose the "SQL Server" driver, but there the version is 6.03.9600.17415.
Is there a difference in speed? I have a feeling that when I use Access locally under Win8 with the newer driver, it is faster than on the Terminal Server with the older driver.
Locally under Win8 I can also choose the driver "SQL Server Native Client 10.0" (version 2009.100.1600.01). What is the difference between these Win8 ODBC drivers? Which driver would you use, and why?
What about a newer SQL Server? For example 2014 vs 2008: is 2014 faster than 2008 over ODBC?
What about the server hardware? If I use an SSD instead of an HDD, does the SSD make the ODBC connection faster?
All users work on Terminal Servers, mainly with Office 2010, but also with proAlpha (an ERP system) and with Access. One user told me that sometimes, when there are not many users on the TS, Access is much faster. What do you think? If I take one TS and work on it only with Access, and no other applications, is ODBC faster then?
What else can I try?
Thank you very much.
I have noticed some performance improvements with SQL Server Native Client 10.0, also using SQL Server 2008 with Access 2010, over the original Native Client.
I would question why you need to search/load all 500,000 rows of your table. Assuming this is in a form, it sounds a bit like poor form design. All your forms should load only the records you are interested in, not all records by default. In fact it's considered reasonably good practice to load no records at all on form load, until you know what the user is looking for.
58 columns also sounds a little excessive - are there memo (varchar(max)) fields included in these columns? Those should probably be moved into a separate table. Examine your data structure and see if you have normalised it correctly.
Are your fields indexed correctly? If you are searching on them, an index will considerably improve performance.
Creating views on SQL Server that return only a suitable subset of records, which can then be linked as tables within Access, can also bring performance benefits.
A table with 500,000 rows is small – even for Access. Any search you do should give results in WELL UNDER 1 SECOND!
The best way to approach this is to ask a 90-year-old lady at a bus stop the following question:
When you use an instant teller machine, does it make sense to download EVERY account and THEN ask the user for the account number? Even 90-year-old ladies at bus stops will tell you it would be FAR better to ASK for the account number and then download 1 record!
And when you use Google, you don't download the WHOLE internet and THEN ask the user what to search for. Nor do you build one huge massive web page and then tell the user to hit Ctrl+F to search that huge browser page.
So think about how nearly all software works: it does not download and prepare all the data locally and THEN ask you what you want to look for. You do the reverse!
So the simple solution here is to ask the user BEFORE you start pulling data from the server. Build a form with a text box for the search term.
Then, to match the search (say on LastName), you use this code in the AfterUpdate event of that text box:
Dim strSQL As String
' Pull only the matching rows; the trailing "*" is the Access wildcard.
strSQL = "SELECT * FROM tblCustomers WHERE LastName LIKE '" & Me.txtLastName & "*'"
Me.RecordSource = strSQL
That way the form ONLY pulls the data you require – even with 10 million rows this approach will run INSTANTLY on your computer. The above uses a "*" so only the first few characters of the LastName need be typed in. The result is a form of "choices". You can then jump to or edit one record by clicking the "glasses" button (shown in the screenshot above). That simply launches + opens one detail form. The code is:
' Open the detail form filtered down to the one selected record.
DoCmd.OpenForm "frmCustomer", , , "id = " & Me!id
Let’s address a few more of your questions:
Is there a difference between the speed? (ODBC drivers)
No, there is really no difference performance-wise between the drivers – they all perform about the same, and users will likely never notice a difference when switching drivers.
For example 2014 vs 2008. Is 2014 faster than 2008 with ODBC?
Not usually. Think of any experience you have with computers (unless you are new to computers). Every time you upgrade to a new Word or a new accounting program, that program is larger, takes longer to load, uses more memory, uses more disk space, and nearly always needs more processing power. Over the last 30 years of desktop computing, in almost EVERY case the next version of a piece of software required more RAM, more disk, and more processing, and thus ran slower than the previous version (I am willing to bet that matches YOUR knowledge and experience). So newer versions tend not to run faster – there are a few rare exceptions in computer history, but later versions of any software tend to require more computer resources, not less.
One user told me that sometimes, when there are not many users on the TS, Access is much faster. What do you think?
The above has nothing to do with ODBC drivers. In this context, when you use Terminal Server, both the database engine and the front end (Access) run on the same computer/server. That means data transfer from the server to the application is BLISTERING fast and occurs not at network speed but at computer speed (since both database and application run on the SAME server). You could instead install Access on each computer and have it pull data OVER the network from the server to the client workstation – this is slow because a network sits in-between. With TS, the application and server run very fast with no network in the middle. The massive processing power of the application and server can work together – once the data is pulled and the screen rendered, ONLY the screen data comes down the network wire. The result is thus FAR FASTER than running Access on each workstation.
sometimes, when there are not many users on the TS, Access is much faster
Correct: since the user's application is running on the server, no network exists between the application and SQL Server. However, since each user's copy of Access runs on the server (as opposed to on each workstation), more load and resources are required on the server. If many users are on the server, it carries a big workload, because it has to run both SQL Server and also allocate memory and processing for each copy of Access running on it.
A traditional setup means that Access runs on each computer. The memory and CPU to run Access are used on each workstation – the server does not have to supply Access with CPU and memory; it ONLY runs SQL Server and services requests for data from each workstation. However, because networks are FAR slower than processing data on one computer, your bottleneck is not processing but the VERY limited network speed.
Since both Access and SQL Server and all processing occur on the server, it is far easier to overload the resources and capacity of that server. But the speed of the network is usually the slowest link in a computer setup. Because all processing and data crunching occur server-side, only the RESULTING screens and display are sent down the network wire. If the software has to process 1 million rows of data and then display ONE total, then only that one total comes down the network wire to be displayed. If you run Access locally on each workstation and process 1 million rows, then 1 million rows of data must come down the network pipe (although you can modify your Access design so that SQL Server processes the data FIRST, before it comes down the pipe, to avoid this issue).
With TS, since Access is not running on your computer, you don't have to worry about network traffic – but you MUST STILL worry about how much data Access grabs from SQL Server – hence the above tips about loading ONLY the data you require into a form. Don't load huge data sets into the Access form; simply ask the user BEFORE you start pulling data from SQL Server.

Performance of a java application rendering video files

I have a Java/J2EE application deployed in a Tomcat container on a Windows server. The application is a training portal where training files such as PDF/PPT/Flash/MP4 files are read from a share path. When the user clicks a training link, the associated file is downloaded from the share path to the client machine and starts playing.
If the user clicks an MP4/Flash/PDF file, it takes too much time to open.
Is there anything we need to configure at the application level? Or is it a load configuration on the server? Or is it something that needs attention in the WAN settings?
I am unable to find a solution for this issue.
Please post your thoughts.
I'm not 100% sure because there are not many details, but I'm 90% sure that the application code is not the main problem.
For example:
If the user clicks an MP4/Flash/PDF file, it takes too much time to open.
A PDF is basically just a string. Flash is a client-side technology. And I'm pretty sure that you just send a stream to a video player in order to play an MP4. So the server is supposed to just take the file and send it. The server is probably not the source of your problem, assuming it can handle the number of requests.
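To illustrate that point, here is a minimal sketch of a servlet that does nothing but stream a file from a share path to the client; the share path, the "name" parameter and the class name are assumptions for the example, not from the original application:

import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Streams a training file straight from the share to the client.
public class TrainingFileServlet extends HttpServlet {
    private static final Path SHARE = Paths.get("\\\\fileserver\\training");

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String name = req.getParameter("name"); // e.g. ?name=course1.mp4
        if (name == null) {
            resp.sendError(HttpServletResponse.SC_BAD_REQUEST);
            return;
        }
        Path file = SHARE.resolve(name).normalize();
        if (!file.startsWith(SHARE) || !Files.isRegularFile(file)) {
            resp.sendError(HttpServletResponse.SC_NOT_FOUND); // also blocks ../ tricks
            return;
        }
        resp.setContentType(getServletContext().getMimeType(name));
        resp.setContentLength((int) Files.size(file));
        try (OutputStream out = resp.getOutputStream()) {
            Files.copy(file, out); // no transformation, just a buffered copy
        }
    }
}

If your handler already looks like this, tuning the application code will gain you little: the transfer time is dominated by whatever sits between the server and the client.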
So it's about your network: it is too slow for some reason. And it's difficult to be more specific without more details.
Regards.

Analyzing User Agent strings

I've recently inherited a large codebase for an online product. One of the things I'm trying to determine is which clients people predominantly use to access the product online (e.g. browsers, mobile devices, iPads, etc.). On the upside, I have a database table with all the User Agent strings (about 10 million records). Can anyone recommend a tool that can analyze and summarize the User Agent data?
Note: keep in mind these are not IIS logs. This is a table of just User Agent strings that were captured by a variety of other processes.
I've never heard of that kind of software, but I can suggest the following option.
You can export this table (with the user agent strings) to a text file that reproduces the IIS or Apache log structure, and then feed that log file to a standard web log analysis tool.
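A rough sketch of that export in Java: write each user agent as a fake Apache "combined" log line so a standard analyzer (AWStats, Webalizer, etc.) can summarize it. The table name, column name and JDBC URL here are invented for the example:

import java.io.BufferedWriter;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Dumps user-agent strings in combined log format; only the UA field matters,
// so the IP, date, request and status fields are dummies.
public class UserAgentExporter {
    public static void main(String[] args) throws Exception {
        String jdbcUrl = "jdbc:sqlserver://localhost;databaseName=stats"; // hypothetical
        try (Connection con = DriverManager.getConnection(jdbcUrl, "user", "pass");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT agent_string FROM user_agents");
             BufferedWriter out = Files.newBufferedWriter(Paths.get("access.log"))) {
            while (rs.next()) {
                String ua = rs.getString(1).replace("\"", "'");
                out.write("127.0.0.1 - - [01/Jan/2012:00:00:00 +0000] "
                        + "\"GET / HTTP/1.1\" 200 0 \"-\" \"" + ua + "\"");
                out.newLine();
            }
        }
    }
}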

What's the most relevant server-push technique for my web application project?

Let me first explain the goal of my web application.
It is dedicated to an intranet, and the architecture will consist of a server connected to the web and fewer than 10 clients.
The application will be used to provide aeronautical information. Data will be retrieved with cURL requests (PHP scripts launched every X minutes as cron jobs) from remote sites (meteorology, airways and airport information) and saved to an XML file or a DB. The gathered information is then presented on a web page (a kind of well-organized synthesis) to air traffic controllers to enhance their situational awareness.
As the gathered data must reach the client in real time, I cannot rely on browser interaction: if an airport is closing due to bad weather, that piece of information has to be displayed as soon as possible, without any user interaction.
The number of airports monitored will be around 30 (this gives you an idea of the server load, knowing that meteorology reports are stored on the X website, airport data on the Y website, etc.).
After reading a lot about Reverse Ajax (server push), I really need professional experience to help me choose the best approach for developing this application.
The Server-Push technologies I discovered on the net are:
1) APE (Ajax Push Engine) -> This one makes me feel like I am trying to open a door with a bazooka (it can handle thousands of connections).
2) Long polling (Comet) -> I fear this one could put too much stress on the server.
3) WebSockets -> I must first wait for them to mature and be supported by Firefox 6 (no more security issues).
As I am completely new to server push, I hope you will help me find the appropriate way to display these data in a close-to-real-time manner. It would certainly be a pity if I ended up adding "refresh" buttons that update the air pressure at airport "A" using Ajax.
Thanks for reading.
If you speak C, there is an example of how to do pushing here. This 3-call API is much simpler than the others, so it might match your relatively simple needs.
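If C is not an option, option 2 from the question (long polling) is also simple to sketch. Below is a minimal, hypothetical servlet-style example in Java; the class name and queue are made up for illustration, and the same idea can be transcribed to PHP:

import java.io.IOException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Long polling: the client re-requests this URL after each response; the
// server parks the request until news arrives or a timeout elapses.
public class UpdatesServlet extends HttpServlet {
    // In a real setup the cron/fetcher job would offer() items into this queue.
    static final BlockingQueue<String> UPDATES = new LinkedBlockingQueue<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        try {
            // Wait up to 30 s for a new event before answering "no news".
            String update = UPDATES.poll(30, TimeUnit.SECONDS);
            resp.setContentType("application/json");
            resp.getWriter().write(update != null ? update : "{}");
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            resp.sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE);
        }
    }
}

With fewer than 10 clients, holding one thread per pending request is affordable; note that a real implementation would keep one queue per client so every controller sees every update, and at larger scale you would move to an asynchronous API.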

Document Conversion Realtime - Implementation Questions

For an intranet portal, we need to convert MS Office documents to PDF in real time when someone follows a link to a document, after checking whether the user is authorized to view it. We also need to cache the converted documents based on the last-modified date of the source document: if another user requests the same document and its content has not changed since the last conversion, we should not convert it again.
I have some basic questions on how we can implement this, and would like to check whether anyone has prior experience or thoughts on how to do so.
For example, if we choose J2EE as the technology and one of the open-source Java libraries for PDF conversion, I have the following questions.
If there is a 100 MB document, we would need to download the entire document from the system hosting it before we start converting. This approach raises major concerns about response time, given that this needs to be real-time viewing. Is there an option to read the first page of a document without downloading the whole thing, so that we can convert it page by page?
How can we cache a document? I do not think we can store the document on the server or in a database, because anyone with access to either could read the document content. Any thoughts?
Or would you suggest an out-of-the-box product instead of custom development?
Thanks
I work for a company that creates a product that does exactly what you are trying to do, using Java / .NET web service calls, so let me see if I can answer your questions without bias.
The whole document will need to be downloaded, as it must be interpreted before PDF conversion can take place (e.g. for page-numbering purposes). I am sure you are just giving an example, but 100 MB is very large for an MS Office document, although we do see it from time to time.
You can implement caching based on your exact security requirements. If you don't want to store the converted files in a (secured) DB or file system, then perhaps you want to store them on a different server behind a firewall. Depending on the number and size of documents you anticipate, you may want to cache them in memory. I am sure there are many J2EE caching libraries available; I know there are plenty in .NET. Just keep the most frequently requested documents in your cache – a minimal sketch follows below.
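As an illustration of that caching idea, here is a small in-memory LRU sketch keyed by source path plus last-modified date; the class name and size limit are invented for the example, not part of any product:

import java.util.LinkedHashMap;
import java.util.Map;

// Caches converted PDFs keyed by source path plus last-modified timestamp,
// so a changed document never hits a stale entry.
public class PdfCache {
    private static final int MAX_ENTRIES = 100; // tune to available memory

    private final Map<String, byte[]> cache =
        new LinkedHashMap<String, byte[]>(16, 0.75f, true) { // access order = LRU
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
                return size() > MAX_ENTRIES;
            }
        };

    private static String key(String path, long lastModified) {
        return path + "@" + lastModified;
    }

    public synchronized byte[] get(String path, long lastModified) {
        return cache.get(key(path, lastModified));
    }

    public synchronized void put(String path, long lastModified, byte[] pdf) {
        cache.put(key(path, lastModified), pdf);
    }
}

Keying on path plus timestamp means a re-edited document naturally misses the cache and gets re-converted, which matches the requirement described in the question.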
Depending on your budget, you may go for an out-of-the-box product (hint hint :-). I know there are free Java libraries that leverage OpenOffice, but you get the same formatting limitations as when opening MS Office files in OO. Be careful when trying to do your own MS Office integration/automation. It is possible to make it reliable and scalable (we did), but it takes a long time and a lot of work.
I hope this helps.
