Simulate Human Interaction - performance

Good Morning Everyone,
This request came across my desk today, and I am not quite sure how to implement it.
Essentially, the requirement is to have a standalone computer that runs simulated user interactions with applications, network share drives, terminal server, etc. while providing performance metrics.
Long story short, it is desired to have a computer click on things without human interaction, and provide metrics such as response time, transfer rate, etc. The software should be able to monitor performance, and potentially even generate reports.
Has anyone heard of a solution that can provide these types of requirements?
I appreciate any insight you might be able to provide. Thanks in advance.
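To make the kind of metrics I'm after concrete, here is a rough sketch (not tied to any product; the share path and payload size are placeholders) of the sort of scripted "user action" I'd want the box to run and record on its own:

```python
import csv
import os
import time
from datetime import datetime

SHARE_PATH = r"\\fileserver\share\probe.bin"   # placeholder UNC path to a network share
PAYLOAD = os.urandom(5 * 1024 * 1024)          # 5 MiB test payload

def probe_share_write():
    """Write a test file to the share, returning elapsed time and transfer rate."""
    start = time.perf_counter()
    with open(SHARE_PATH, "wb") as f:
        f.write(PAYLOAD)
    elapsed = time.perf_counter() - start
    rate_mib_s = (len(PAYLOAD) / (1024 * 1024)) / elapsed
    return elapsed, rate_mib_s

if __name__ == "__main__":
    elapsed, rate = probe_share_write()
    # Append each measurement to a CSV so the results can be graphed or reported later.
    with open("share_metrics.csv", "a", newline="") as report:
        csv.writer(report).writerow([datetime.now().isoformat(), elapsed, rate])
    print(f"write took {elapsed:.2f}s at {rate:.1f} MiB/s")
```

Something along those lines, but covering application clicks, terminal server sessions, and so on, and with proper reporting on top.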

Related

Is real time collaboration possible using a text area?

I am developing an application which requires real time collaboration. I am planning to use a cshtml text area to allow the users to type. Is real time collaboration achievable using a text area?
Also, I have read a little about operational transformation. Can it be achieved using .net framework?
I am just a beginner and do not have much knowledge about algorithms that will help me achieve real time collaboration. Any help will be appreciated.
Thanking you in advance.
ShareJS is free, uses node.js to achieve what you are looking for, and implements an OT algorithm.
For .NET there is no Operational Transformation support out of the box; however, you can take a look at the BeWeeBee SDK (though it is commercial software).
I am developing an application which requires real time collaboration. I am planning to use a cshtml text area to allow the users to type. Is real time collaboration achievable using a text area?
This really depends on the user experience you want to deliver. If you want to lock the textarea for one user whilst the other is editing then that might not be the nicest user experience but it's most definitely relatively easy to do.
If you want two or more users to be able to simultaneously edit the same text area then sending data_changed events between the users is reasonably easy using a realtime web technology but you'll need to handle the synchronisation of the textarea content between the users and handle collision detections. The user experience for this is also much more complex.
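To show the shape of the "send data_changed events between the users" part, here is a minimal relay sketch in Python using plain asyncio streams, purely as an illustration of the broadcast pattern rather than any particular realtime framework (the port and payload format are assumptions):

```python
import asyncio

clients = set()  # StreamWriters for every connected editor

async def handle_editor(reader, writer):
    """Relay every newline-delimited data_changed event from one client to all the others."""
    clients.add(writer)
    try:
        while True:
            line = await reader.readline()
            if not line:  # client disconnected
                break
            # 'line' would carry a JSON payload such as {"event": "data_changed", "text": "..."}
            for other in list(clients):
                if other is not writer:
                    other.write(line)
                    await other.drain()
    finally:
        clients.discard(writer)
        writer.close()

async def main():
    server = await asyncio.start_server(handle_editor, "127.0.0.1", 9000)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```

The hard part, as noted above, is not the relaying but deciding what happens when two edits collide.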
Also, I have read a little about operational transformation. Can it be achieved using .net framework?
I had to look up operational transformation, and it partially answers the question about the user experience - it's non-blocking. Having skim-read the wiki doc, I'd ask: why would it not be possible? You can communicate instantly between all users/applications to notify them of changes (as stated: using a realtime web technology), so you just need to implement and manage all the clever algorithmic stuff :) (I don't know if there's a component that will manage that for you)
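To give a flavour of that "clever algorithmic stuff", here is the textbook transform for two concurrent character insertions, written in Python purely as an illustration (real OT also handles deletes, deterministic tie-breaking by user id, and server ordering):

```python
def transform_insert(local, remote):
    """Adjust a local insert (pos, text) so it still applies after a concurrent remote insert.

    If the remote insert lands at or before our position, our insertion point shifts
    right by the length of the remote text (a real implementation breaks ties
    deterministically, e.g. by user id); otherwise it is unaffected.
    """
    local_pos, local_text = local
    remote_pos, remote_text = remote
    if remote_pos <= local_pos:
        return (local_pos + len(remote_text), local_text)
    return (local_pos, local_text)

# Both users start from "helo". User A inserts "l" at 3, user B inserts "!" at 4.
doc = "helo"
a = (3, "l")
b = (4, "!")
# Apply A first, then B transformed against A: both replicas converge on "hello!".
after_a = doc[:a[0]] + a[1] + doc[a[0]:]
b_t = transform_insert(b, a)
after_both = after_a[:b_t[0]] + b_t[1] + after_a[b_t[0]:]
print(after_both)  # hello!
```

Applying the operations in the other order, with A transformed against B instead, converges on the same result, which is the whole point of the transform.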
For self hosted .NET realtime web technologies you might want to look at SignalR, XSockets, SuperWebSocket or WebSync.
If you want to get up and running a bit faster you might look at a hosted realtime web technology
This is an old question, but there is some additional information that might be helpful. As the previous answers mention, there are several options out there for text-based data synchronization, many of them based on Operational Transformation or CRDTs. These approaches are implemented in SDKs in many languages. (Full disclosure: I happen to be one of the authors of Convergence.)
However, you also need to take into account some of the other features required to implement collaborative editing. For example:
Presence: Who is there editing with you? (A small sketch of this follows the list.)
Collaboration Awareness: Things like shared cursors and selections?
Local vs. Group Undo: What happens when a user hits control-z? Are they undoing the last action they did, or the last action the other remote users did?
History: Knowing who did what is more complicated when multiple people are editing at the same time. When one user hits save (if there is a save) they may be saving actions performed by another user.
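To make the first point a little more concrete, presence tracking on the server can be as simple as a session registry whose roster is broadcast whenever someone joins or leaves. The names and structure below are hypothetical, just to show the idea:

```python
class PresenceRegistry:
    """Tracks which users currently have the document open."""

    def __init__(self):
        self._sessions = {}  # session_id -> display name

    def join(self, session_id, display_name):
        self._sessions[session_id] = display_name
        return self.roster()

    def leave(self, session_id):
        self._sessions.pop(session_id, None)
        return self.roster()

    def roster(self):
        """The list each client would render as 'who is editing with you'."""
        return sorted(self._sessions.values())

presence = PresenceRegistry()
print(presence.join("s1", "Alice"))   # ['Alice']
print(presence.join("s2", "Bob"))     # ['Alice', 'Bob']
print(presence.leave("s1"))           # ['Bob']
```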
These are just a few examples of things to consider in collaborative editing beyond data synchronization. When these questions come up, most answers focus solely on the data synchronization framework. At Convergence Labs, we help people design collaborative editing applications and have implemented dozens of such apps. We have seen many times over that if all you put in is data synchronization, the user experience turns out to be pretty poor and users will not like the application.
So, in selecting a framework, look for something that helps you implement some of the other facets of real time editing, or at the least be prepared to implement them yourself on top of whatever tools you select.

Testing a wide variety of computers with a small company

I work for a small dotcom which will soon be launching a reasonably complicated Windows program. As the program has been passed around to the various non-technical types, a number of "WTF?"-type scenarios have turned up that we've been unable to replicate.
One of the biggest problems we're facing is that of testing: there are a total of three programmers -- only one working on this particular project, me -- no testers, and a handful of assorted other staff (sales, etc). We are also geographically isolated. The "testing lab" consists of a handful of VMWare and VPC images running sort-of fresh installs of Windows XP and Vista, which runs on my personal computer. The non-technical types try to be helpful when problems arise, we have trained them on how to most effectively report problems, and the software itself sports a wide array of diagnostic features, but since they aren't computer nerds like us their reporting is only so useful, and arranging remote control sessions to dig into the guts of their computers is time-consuming.
I am looking for resources that allow us to amplify our testing abilities without having to put together an actual lab and hire beta testers. My boss mentioned rental VPS services and asked me to look into them; however, they are still largely self-service, and I was wondering if there were any better ways. How have you, or other companies in a similar situation, handled this sort of thing?
EDIT: According to the lingo, our goal here is to expand our systems testing capacity via an elastic computing platform such as Amazon EC2. At this point I am not sure suggestions of beefing up our unit/integration testing are going to help very much as we are consistently hitting walls at the systems testing phase. Has anyone attempted to do this kind of software testing on a cloud-type service like EC2?
Tom
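To be concrete about the EC2 idea, this is roughly the level of scripting I have in mind (a sketch assuming the boto3 library; the AMI ID, instance type, and key name are placeholders):

```python
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

def launch_test_instances(count):
    """Start short-lived instances pre-baked with the application and test harness."""
    return ec2.create_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder: a prepared test AMI
        InstanceType="t3.medium",          # placeholder size
        KeyName="systems-test",            # placeholder key pair
        MinCount=count,
        MaxCount=count,
    )

def terminate(instances):
    """Tear the fleet down once the test results have been collected."""
    for instance in instances:
        instance.terminate()

if __name__ == "__main__":
    instances = launch_test_instances(3)
    for instance in instances:
        instance.wait_until_running()
    print("test fleet:", [i.id for i in instances])
    # ... run the systems tests remotely, gather logs ...
    terminate(instances)
```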
The first question I would ask is whether you have any automated testing being done.
By this I mainly mean unit and integration testing. If not, then I think you need to look into unit testing immediately, first as part of your build process, and second via automated runs on servers. Even with a UI-based application, it should be possible to find software that can automate the actions of a user and tell you when a test has failed.
Apart from the tests you as developers can think of, every time a user finds a bug, you should be able to create a test for that bug, reproduce it with the test, fix it, and then add the test to the automated tests. This way if that bug is ever re-introduced your automated tests will find it before the users do. Plus you have the confidence that your application has been tested for every known issue before the user sees it and without someone having to sit there for days or weeks manually trying to do it.
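As a sketch of what one such automated user-action test might look like, using the third-party pyautogui package purely as an assumed example (the executable path, bug number, and timings are hypothetical), each reproduced bug becomes a scripted scenario that fails loudly if it regresses:

```python
import subprocess
import time

import pyautogui  # assumed third-party GUI automation library

APP_PATH = r"C:\Program Files\OurApp\OurApp.exe"  # placeholder path

def test_save_does_not_crash():
    """Reproduces a hypothetical bug: saving from a fresh profile used to crash the app."""
    app = subprocess.Popen([APP_PATH])
    time.sleep(10)                      # crude wait for the main window to appear
    try:
        pyautogui.hotkey("ctrl", "s")   # drive the UI exactly as a user would
        pyautogui.write("regression_test_document")
        pyautogui.press("enter")
        time.sleep(5)
        # If the bug regressed, the process will have exited with an error by now.
        assert app.poll() is None, "application crashed during save"
    finally:
        app.kill()
```

A test runner can then execute the whole suite on each build and flag any scenario that fails.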
I believe logging application activity and error/exception details is the most useful strategy to communicate technical details about problems on the customer side. You can add a feature to automatically mail you logs or let the customer do it manually.
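As a minimal sketch of that idea, illustrated here with Python's standard logging module just to show the shape of it (host names and addresses are placeholders), you can log everything to a file the customer can send you and e-mail errors automatically:

```python
import logging
import logging.handlers

logger = logging.getLogger("ourapp")
logger.setLevel(logging.DEBUG)

# Everything goes to a local file the customer can send manually...
logger.addHandler(logging.FileHandler("ourapp.log"))

# ...and errors are additionally e-mailed straight to the developers.
mailer = logging.handlers.SMTPHandler(
    mailhost=("smtp.example.com", 25),          # placeholder SMTP relay
    fromaddr="ourapp@example.com",
    toaddrs=["devteam@example.com"],
    subject="OurApp error report",
)
mailer.setLevel(logging.ERROR)
logger.addHandler(mailer)

try:
    risky_operation = 1 / 0                     # stand-in for real application work
except ZeroDivisionError:
    logger.exception("unhandled error in risky operation")
```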
The question is, what exactly do you mean to test? Are you only interested in error-free operation, or are you also concerned with how the software is received on the customer side (usability)?
For technical errors, write a log and manually test different scenarios on different OS installations. If you could add unit tests, that could also help. But I suppose the issue is that it works on your machine but doesn't work somewhere else.
You could also debug remotely by using IDE features like "Attach to remote process". I'm not sure how to do it if you're not in the same office; you would likely need to set up a VPN.
If it's about usability, organize workshops. Have new people work with your application while you record video and audio. Then analyze the problems they encountered in team "after-flight" sessions. Talk to users, ask what they didn't like, and act on it.
Theoretically, you could also build this activity logging into the application. You'll need a clear idea, though, of what to log and how to interpret the data.

Web site performance tools?

YSlow, dynaTrace, HTTPWatch, Fiddler .........
All these things are really good for measuring the performance of a website and getting statistics about it. YSlow is really cool and offers good guidelines as well.
However, I am very confused with so many options around (though it's good that people have already invested time and made nice guidelines to follow, and I thank them for the great work done).
Following are my questions:
How accurate are these tools in terms of the numbers they show?
Which tool is BEST to use (one for all needs)? Or am I missing some tool that is out there and better than all of the above?
I'm surprised that you haven't mentioned JMeter. It is free, quite easy to use, has lots of features, and is great for load testing your website.
As for question one, I'm not sure I can answer that. I'm sure that in general, the numbers these tools show are pretty accurate, but there are some catches. Take JMeter for example:
JMeter itself uses a lot of memory and also some substantial CPU time if you do heavy load testing. That means that if you run the tool on the same machine as your website, some resources are lost, i.e. not available to the website.
Testing on the same machine also does not, out of the box, take into account that the data has to be sent over the internet connection, so response times are lower than they would be in reality.
But in all, I think you should never blindly trust the results these tools give you, but they can give you a good insight into possible bottlenecks or problems.
YSlow is good for measuring performance for a single user. Try to keep it at grade A and it will be OK. But it doesn't actually measure performance with multiple concurrent users. For that you can use, among others, Apache JMeter. It's a good webserver/web application stress test tool. So I would say: just use both YSlow (for client performance) and JMeter (for server performance).
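To make the "multiple concurrent users" point concrete, this is roughly what a load tool measures, boiled down to a few lines of Python (the URL and counts are placeholders; a real tool like JMeter adds ramp-up, assertions, and reporting):

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

URL = "http://example.com/"   # placeholder: the page under test
CONCURRENT_USERS = 20
REQUESTS_PER_USER = 5

def one_user():
    """Simulate one user hitting the page repeatedly, returning response times in seconds."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(URL) as response:
            response.read()
        timings.append(time.perf_counter() - start)
    return timings

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    all_timings = [t for user in pool.map(lambda _: one_user(), range(CONCURRENT_USERS)) for t in user]

print(f"requests: {len(all_timings)}  avg: {mean(all_timings):.3f}s  worst: {max(all_timings):.3f}s")
```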
I haven't used dynaTrace before, so I'll skip that part. The HTTP request trackers mentioned don't really measure performance; they are more like debuggers.
As far as I am concerned, I find YSlow to be really good (I have tried Fiddler too). It helps me when I need it, and I believe it provides accurate figures, so I will keep using it unless something better or more widely accepted comes along (which is difficult, because everyone has different choices and requirements). The others are right, though: JMeter definitely deserves a mention too.
There is also Speed Tracer extension for Chrome. It should be usable with any JavaScript heavy website.
http://code.google.com/webtoolkit/speedtracer/
http://gtmetrix.com is a good tool and it is free. It analyzes your page's speed performance using Page Speed and YSlow.

Web Application IPC/RPC with Client Applications

Background
I'm at the planning stages of a DIY project that'll help me automate some hardware at my house. It's probably also worthwhile to mention that I've got almost no experience with web-related development.
The Basics
http://img7.imageshack.us/img7/4706/drawingo.png -- I can't seem to embed the diagram.
In order to simplify management, I want to implement my UI in the browser.
The meat of my application will reside inside a Windows service or Linux daemon; this does not mean, however, that I'm after a cross-platform solution -- I'm not tied to any particular platform, so I'll pick one (probably based on the responses that I get) and stick with it.
I would prefer to use "free" tools (e.g., LAMP/WAMP), but it's not a deal breaker.
It would be nice to be able to communicate back to the user that some action is in progress (I think AJAX would be one way to go?)
Questions
The only thing that's not entirely clear to me is the implementation of step № 3. I'd like to hear possible implementation ideas (on Windows or Linux) as to how this should be done. Hopefully some of you can share how this sort of thing is done in the real world.
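For concreteness, here is the rough shape of what I imagine step 3 could look like on the Linux side: the web front end sends a small request/response message to the daemon over a Unix domain socket. This is only a hedged sketch in Python; the socket path and command names are made up:

```python
import json
import os
import socketserver

SOCKET_PATH = "/run/home-automation.sock"   # placeholder path the web front end also knows

class CommandHandler(socketserver.StreamRequestHandler):
    """Receives one JSON command per line from the web layer and replies with a status."""

    def handle(self):
        request = json.loads(self.rfile.readline())
        # Placeholder dispatch: real code would switch on request["command"],
        # drive the hardware, and report progress back as it goes.
        reply = {"command": request.get("command"), "status": "accepted"}
        self.wfile.write(json.dumps(reply).encode() + b"\n")

if __name__ == "__main__":
    if os.path.exists(SOCKET_PATH):          # clean up a stale socket from a previous run
        os.unlink(SOCKET_PATH)
    with socketserver.UnixStreamServer(SOCKET_PATH, CommandHandler) as server:
        server.serve_forever()
```

The web layer (whatever the LAMP/WAMP stack ends up being) would open the same socket, write one JSON line, and read the reply; on Windows, a named pipe or a localhost TCP socket exposed by the service would play the same role.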
Miscellaneous
As always, if there's a problem with my thinking, please point it out!
There are many people better qualified to help with step 3 so I'll leave that to them.
My question is whether you are looking forward to learning the mess of web technologies required for the front end, or whether you consider it a necessary evil on the way to what you really want to accomplish. If the latter (and assuming you are working in C/C++), consider taking a look at WT. It's a toolkit that makes developing the web interface feel more like building a desktop GUI, while handling much of the ugliness for you. It could potentially cut a lot of time off your development.

The pros and cons of "Shadow IT" in software development [closed]

Recently we’ve seen the emergence of so-called “Shadow IT” within many organisations. If you’re not already familiar with the term, it refers to those who manage to dodge the usual IT governance by means such as using thumb drives to share files or “unapproved” software products to achieve business tasks. Shadow IT can emerge from within technology groups but in many cases is sourced from non-tech areas such as the marketing or sales department.
What I’m really interested in is examples you have of Shadow IT within software development. Products like Excel and Access are often the culprits as their commonality means they’re easily accessible to the broader organisation. In many cases this is driven by someone who has just enough knowledge to make the software perform a business function but not quite enough to be aware of all the usual considerations required when building software for an enterprise.
What sort of cases of Shadow IT have you witnessed in the software development space? What processes have you seen unofficially addressed by this practice and just how important have these tools become? An example would be the use of a single Access database on a folder share becoming common practice for tracking promotions across the marketing department. Remember this cuts both ways; it can be extremely risky (lack of security, disaster recovery, etc) but it can result in innovation from a totally unexpected source.
Why does IT assume they should own and control all technology in the business?
The very fact that we have a name for technology that IT does not control (Shadow IT) suggests that we'd like IT to have control over all technology in an organization.
The only real reason I can think of for IT to have control is security (even then, I'd be very wary of trusting the most sensitive data to IT). Most other reasons given against business-user-developed solutions are completely false. Take the reasons above: "software produced may not be well designed...", "the software may not be well supported...". Who are we kidding here? IT's track record on these fronts is simply not good enough to claim the high ground.
Savvy business users solve their own information problems - they were doing so long before IT existed. Anyone remember triplicate forms? Fax machines? Photocopiers? These things didn't need IT departments to govern them, and they worked very well. If IT cannot solve the problem, or IT's track record has been sufficiently poor that business users have lost faith in it, then business users will solve their own problems using whatever means are available to them. Access, Excel, and shared drives are frequently used very successfully by business users. If IT is to stay relevant to an organization, it needs to support its business users' needs and deliver technology that people actually want to use, not just technology people use because they have to.
I have seen an organization where a multimillion dollar portal implementation promised to solve many business technology and information sharing problems. Years later, still not in production, business users gave up, and in despair developed their own solutions by outsourcing the development of a data centric web application. Guess what? It worked brilliantly and other departments are now bypassing IT and doing the same, on their own departmental budgets.
IT is a support organization for business users. This may offend some who believe IT's place is somewhere alongside executive management in terms of its importance to the business, but IT has to deliver what the business needs; otherwise it's just justifying its own existence.
The advantage is that users get exactly what they want and need, when they want and need it. Getting a request through a largish IT shop is a trying experience for a user. IT rarely has the business knowledge to let them give the business owners exactly what they are asking for, and when requests are denied or requirements amended, an explanation in plain English (or whatever language) is rarely forthcoming.
The disadvantages outweigh the benefits. Societe Generale lost billions due in part to "Shadow IT". It can cause support nightmares when an Access application, for example, becomes essential and outgrows the capabilities of the person who created it, or that person leaves. Even a poorly written Crystal Report can become so popular and widely used that it starts to drag down the database it is accessing when reporting time comes around. And if the person who wrote that report did not fully understand relational databases, it could produce bad data in some situations; data that causes bad business decisions to be made. Using a commercial (outsourced) application guarantees that the users will not get exactly what they want; there will always be compromises, and no explanation of why they were made.
The previous poster was right. Shadow IT exists because IT does not do its job well enough. There is not enough business knowledge, not enough responsiveness, and especially not enough communication. These things are why "Shadow IT" exists. The business owners paid for the machines, the admins, the dbas, and the programmers. It frustrates them when IT loses sight of that.
At the end of the day, the primary driver for most businesses is results, i.e. making money. If the business sees that it can achieve the desired outputs necessary for the operation without spending thousands on software, but through "shadow IT" instead, then I can only see it being encouraged. I feel that it is part of our job as developers to point out the pitfalls of operating in this fashion.
The pros of "shadow IT" could be
cost - less expensive
whilst the people writing the software may not be software experts, they are likely to be domain experts and have an intrinsic knowledge of how a piece of software should function.
depending on how the IT is organized, "shadow IT" may be able to respond faster to changes and business needs than the core IT can.
And the cons
software produced may not be well designed to be extensible, handle errors correctly, and cover all the other aspects that come from experience in software development.
the software may not be well supported or, due to the way in which it has been produced, there may be no support at all.
Over time, the average person is becoming more IT savvy. Younger marketeers and finance people know that Excel and Access make them vastly more efficient. Working without them would make them feel handicapped.
I expect this trend to continue, with corporate IT becoming more of an enabling organization: one that makes data available, helps users troubleshoot their workflows, and limits them to a specific compartment for security.
What was called software development 10 years ago will be everyman's tool 10 years from now!
There is no such thing. There are dinosaurs, and there are people who need to get work done.
If something like 'Shadow IT' happens, it is because 'Official IT' is not doing its job.
Software developers have hundreds of little and not so little applications they need to get their work done. The IT governance organisation should learn how to handle tens of updates a day, and switch to releasing daily (and patching a few times a day). Development has learned how to do that, they are next.
Sometimes I use Amazon EC2 and/or RDS when my company's resources are not enough or would take too long to provision. I pay for this out of my own pocket but do get to achieve my goals faster. All this without having to spend painful hours in meetings, trying to convince superiors or the SA-s that I really do need to do some thing or other.
In my mind, EC2 is the ultimate shadow IT. It's super easy to get going and provides me with the ultimate control.
Well, I suppose these things are everywhere. It's not a big deal as long as it doesn't threaten the company's operation in any way.
Yeah, it's a big problem where I work. Architects and DBAs try to build a centralized system, but these little "Shadow IT" departments make small apps that have their own security or duplicated data... Personally, if I were the head of IT, I would fire anyone who started such a project without IT support. Kinda harsh, but it's important to keep the system healthy.
Most software developers have "unapproved" software on their computers. Just expect it. I'm not sure how much I have, but I'm sure I have dozens, if not hundreds of utilities that corp. IT has never even heard of on my work laptop.
