Heuristic Evaluation: (with a little self-promotion you may find useful at the end)
A Heuristic Evaluation is a usability method that helps identify usability issues and gaps in a UI. It specifically involves examining the interface and judging its compliance with recognized usability principles (the heuristics).
I usually follow the basic principles of the Nielsen Norman Group, and then some.
Here are the basics with some comments/additions (the following is taken from NN/g):
Visibility of system status
The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
My best practice: Only surface what the user needs to know; don't overwhelm them with data. Progressive disclosure of information is key, especially with big data.
Use Natural Language as often as possible
The system should speak the users' language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
My best practice: I can't stress this enough. Porting back-end character strings straight into the UI is a death knell. Take the extra time to map these to natural-language labels.
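The mapping above can be sketched as a simple lookup with a humanizing fallback for keys that haven't been mapped yet. The field keys and labels here are invented for illustration:

```typescript
// Hypothetical mapping from raw back-end field keys to natural-language labels.
const FIELD_LABELS: Record<string, string> = {
  usr_lgn_ts: "Last login",
  acct_bal_cur: "Current balance",
  perm_lvl: "Permission level",
};

// Fallback: humanize an unknown key instead of ever showing it raw in the UI.
function labelFor(key: string): string {
  if (key in FIELD_LABELS) return FIELD_LABELS[key];
  const words = key.replace(/[_-]+/g, " ").trim();
  return words.charAt(0).toUpperCase() + words.slice(1);
}
```

The fallback is a safety net, not a substitute for the mapping table: "Perm lvl" reads better than "perm_lvl", but "Permission level" is what the user actually speaks.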
Non-convoluted user exits
Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
My best practice: OK, there are two issues at play here.
1. (tangential issue) NEVER surface actions that a user can never perform. Don't pervasively disable them; it is confusing and frustrating. One solution is to have roles, with the actions presented being role-based.
2. (two facets) A. Always alert a user when a destructive action has been chosen, and offer a "Cancel" exit. B. Design the UI so that the chance of a user selecting a system action by mistake is extremely slim!
RULE OF THUMB: If a user CAN get themselves in trouble, they WILL! Some users panic and start clicking madly away to get out of wherever they ended up. That being said, your default action should ALWAYS be the non-destructive choice.
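Both points can be sketched in code: surface only the actions a role can perform, and make the safe choice the default on any destructive confirmation. The roles, actions, and dialog shape here are all hypothetical:

```typescript
// Hypothetical roles and actions for illustration.
type Role = "viewer" | "editor" | "admin";

interface Action {
  id: string;
  label: string;
  destructive: boolean;
  allowedRoles: Role[];
}

const ACTIONS: Action[] = [
  { id: "export", label: "Export", destructive: false, allowedRoles: ["viewer", "editor", "admin"] },
  { id: "edit",   label: "Edit",   destructive: false, allowedRoles: ["editor", "admin"] },
  { id: "delete", label: "Delete", destructive: true,  allowedRoles: ["admin"] },
];

// Surface only the actions the role can actually perform --
// never render pervasively disabled controls.
function visibleActions(role: Role): Action[] {
  return ACTIONS.filter((a) => a.allowedRoles.includes(role));
}

// Destructive actions get a confirmation, and the default (focused)
// button is always the non-destructive choice.
function confirmDialog(action: Action) {
  return {
    message: `Are you sure you want to ${action.label.toLowerCase()}? This cannot be undone.`,
    buttons: ["Cancel", action.label],
    defaultButton: "Cancel",
  };
}
```

A panicking user who mashes Enter lands on "Cancel", not on the destructive action.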
Consistency and standards
Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
My best practice: Consistency is key to gaining, and keeping, user confidence in your product. It takes only one slip to lose a user's confidence, and far longer to win it back.
Error prevention
Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
My best practice: Destructive-action validation is really key, and I always alert the user if the action they are about to perform will delete data.
Recognition rather than recall
Minimize the user's memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
My best practice: Especially in drill-down UIs, it is really key to give the user a clear path into, and out of, the UI, particularly with big data. I accomplish this in a number of ways: breadcrumbs, tabs, and in-situ filter structures. Another key is to ensure your actions are logical, intuitive, and always in the same location.
Flexibility and efficiency of use
Accelerators -- unseen by the novice user -- may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
My best practice: In enterprise software, we frequently have more technical roles/personas with a whole "Administration" section of screens in the UI, accessed either through an admin subset/plugin or initiated by role choice at login.
Aesthetic and minimalist design
Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
My best practice: Yes, AND in natural language! :-)
Help users recognize, diagnose, and recover from errors
Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
My best practice: Add a link directly to the specific documentation anchor, and if the user needs to jump to a different menu or screen, put a link to that on the dialog as well. On occasion I will add a "Read more" link on some non-error dialogs too.
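One way to make that practice concrete is to give every user-facing error a consistent shape: plain language, precise cause, a constructive suggestion, plus direct links to the doc anchor and the screen that resolves it. The field names and the sample error below are assumptions for illustration:

```typescript
// Hypothetical shape for a recoverable, user-facing error.
interface UserFacingError {
  message: string;     // plain language, no internal codes
  cause: string;       // precisely what went wrong
  suggestion: string;  // constructive next step
  docAnchor?: string;  // deep link to the relevant doc section
  fixLink?: { label: string; route: string }; // jump straight to the fix
}

const quotaError: UserFacingError = {
  message: "Your report could not be saved.",
  cause: "Your workspace has reached its storage limit.",
  suggestion: "Free up space, or ask an administrator to raise the limit.",
  docAnchor: "/docs/storage#limits",
  fixLink: { label: "Manage storage", route: "/settings/storage" },
};

// Render the dialog body; links are rendered separately as buttons/anchors.
function renderError(e: UserFacingError): string {
  return `${e.message} ${e.cause} ${e.suggestion}`;
}
```

Because the `docAnchor` points at a specific section rather than a landing page, the user never has to search the documentation themselves.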
Help and documentation
Even though it is better if the system can be used without documentation or UI help, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.
My best practice: If you have to link to documentation from within an application, make sure the link takes the user directly to the relevant, contextually significant information. Forcing a user to then search through the doc or a table of contents is frustrating, ultimately a complete waste of time, and it lessens the user's confidence in the product. When inserting inline help into a UI, keep the hover help succinct. At times I have inserted a "Read more" link within a hover; yes, it is a bit of a one-off of the control, but in some instances it has worked very well in testing, and it lessens the need for an overt "Help" icon inline that can clutter the UI needlessly.
When I do a Heuristic Evaluation, I slice the UI in a few different ways. The topics above are one group of reports. Other methods I add to the above are:
The easiest way to get the metrics you need for this evaluation is to huddle with your QA/Test teams. Likely they will have all of their test cases laid out exactly the way you need to evaluate each task, and evaluating based on these cases makes iterative testing in the Design Thinking model easy. Some examples of these are:
As a user, I can change my role.
As an Administrator, I can set authorization rules for both users and groups.
As a Supervisor, I can see the performance and velocity of my direct reports.
Bear in mind that, depending on the nature of your application, there can be hundreds of these. Don't panic, just document and evaluate one by one.
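Documenting and evaluating the stories one by one can be as simple as a checklist structure that ties each story to the heuristic it exercises. The fields and heuristic assignments below are illustrative, not a prescribed schema:

```typescript
// Sketch: turning QA-style user stories into a heuristic-evaluation checklist.
interface StoryEvaluation {
  story: string;
  heuristic: string;       // which heuristic the story exercises
  passed: boolean | null;  // null = not yet evaluated
  notes?: string;
}

const checklist: StoryEvaluation[] = [
  { story: "As a user, I can change my role.", heuristic: "User control and freedom", passed: null },
  { story: "As an Administrator, I can set authorization rules.", heuristic: "Error prevention", passed: null },
  { story: "As a Supervisor, I can see my direct reports' velocity.", heuristic: "Visibility of system status", passed: null },
];

// Even with hundreds of stories, the remaining count keeps the panic at bay.
function remaining(list: StoryEvaluation[]): number {
  return list.filter((s) => s.passed === null).length;
}
```

A shared structure like this also makes it easy to re-run the same checklist on each design iteration.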
QA/Test are your BEST friends in the design process. They will uncover EVERY gap in usability and inconsistency in implementation. I have had the distinct pleasure of working with the most detail-oriented QA and Test people in the business, and I am grateful for it!
Importance vs Residency vs Click Path Evaluation (patented)
This is a method I developed a few years ago, and though I am loath to shamelessly self-promote, I was generously awarded a patent for it in 2015.
The gist is that there are 6 main dimensions to a UI:
What is the element/grouping's primary function? (select, define, view, report, run, ...)
Where is it on the screen?
How long is it visible on the screen?
How much real estate does it use?
How often is it utilized by a user?
What is the click relationship to other elements on the page?
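The six dimensions can be captured as a per-element record, with a simple use-vs-visibility parity check to surface candidates for deeper evaluation. The field names, the 0-1 scales, and the threshold are my assumptions for this sketch, not the patented method itself:

```typescript
// Sketch: one record per UI element, covering the six dimensions.
interface ElementProfile {
  id: string;
  primaryFunction: "select" | "define" | "view" | "report" | "run";
  position: { x: number; y: number }; // where on the screen
  visibility: number;   // fraction of the session the element is on screen (0-1)
  realEstate: number;   // fraction of screen area it uses (0-1)
  use: number;          // fraction of sessions with user interaction (0-1)
  clickNeighbors: string[]; // click relationships to other elements
}

// Flag elements where use and visibility diverge badly -- these are the
// non-parity areas that warrant deeper evaluation.
function disparities(elements: ElementProfile[], threshold = 0.4): string[] {
  return elements
    .filter((e) => Math.abs(e.use - e.visibility) > threshold)
    .map((e) => e.id);
}
```

An always-visible panel that is rarely used (or a heavily used control buried behind clicks) falls out of this check immediately, much like the Map panel example that follows.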
In this example, we see a wireframe of a product. An evaluator can easily and visually compare use, visibility, location, residency, and click path. This method allows for an easier and quicker evaluation of data areas.
In areas where use equals visibility, the two shapes (diamond/circle) have parity; this MAY mean the data area is revealing in a manner beneficial to the user's needs. In areas of great disparity, the evaluator can quickly weigh importance of use vs. residency based on the progressive reveal of the data along the click path.
The arrows show the user's common click path for a task.
Dotted boxes show data areas that reveal on-click.
In areas where the red circle and the blue square show considerable non-parity, that is a signal to evaluate those areas more deeply. While this disparity does not always mean there is a problem (e.g., the banner/title bar), it does highlight areas where innovation in workflow could improve usability. In this example, the Map panel immediately comes into focus as an area that may need investigating.
Non sequitur: Have you ever said a word repeatedly until it sounds completely foreign and odd? No? Just me? :-)