machinespace

machinespace = the networked information space of ever-increasing complexity that humans have to interact with.

September 27, 2004

Simplify, don't “dumb-down”

As UI designers, we implicitly start with the assumption that all applications and products need to be simplified – is this necessarily true? What makes a product or application complex? It could be many things – the complexity of its workflows, the number of information input sources, the number of transactions, integration with other systems, the security protocols needed, validation procedures, whatever…

In all the years I have been working in Human Factors, I have always looked for ways to make things easier for the "User". Ways to reduce the physical and cognitive burden by reducing the number of things to remember, reducing the number of interactions, reducing the need to think, etc. We all “know” that the key to usable products and applications is simplification.

What if we are wrong? What if we are short-changing the user communities we are designing for? What if we simplify things to such a point that a user is helpless when faced with a problem? Can our efforts actually hinder users when they face an “abnormal” situation?

Problem solving and decision making under uncertainty are both highly valued qualities in any environment – business, military or civilian. When we surround people with applications and products that do not require them to reason things out, or to investigate different options, we may be undermining our users’ ability to form a good mental representation of the underlying processes, and thus their ability to respond to “out of the ordinary” situations.

When we talk of simplification, what do we really mean? Does it mean bringing everything down to the lowest common denominator? What would it mean if we designed everything like that? Is simplification the same as “dumbing down”?
Dumbing things down does not make them usable, does it?

Does “over-simplification” really make things more usable? If a function or process is complex in nature, for whatever reason – criticality, number of interactions, inter-related processes, time-critical responses, etc. – it may be possible to simplify it only to a certain extent – beyond that, simplification may prove detrimental because it conceals the true nature of the beast.

What if what we are doing is actually deleterious to the user’s understanding of the system? Is there a way that they can rip off the cover to get at the innards? After all, we are not in a factory environment where the most efficient interaction is valued the most because it increases manufacturing production. We are usually looking to streamline workflows to increase user effectiveness, reduce errors and enhance the user’s experience with the system they are interacting with. A simplified system that leaves the user with a false sense of security, because they are unable to view the “hidden” processes, may prove disastrous in an emergency situation.

Usability cannot mean “dumbed down”, and the role of a UI designer is not to look for ways to dumb down the product interface and interactions to fit the so-called lowest common denominator among their end-user groups.

How does dumbed-down design occur? There are two ways – the first is when a process or interaction is automated, and the user is relieved of “decision making” responsibility for a particular task – this reduces the burden on the user, ostensibly leaving them free to concentrate on more important tasks. However, some portions of the process may not be automated, leaving the User Interface with a vestigial screen or two which the user has to interact with, even though it serves no useful purpose.

Whatever interactions the designer chooses to leave behind are usually there to preserve the user’s perception of process integrity, so they will have the feeling of going through all the steps they are used to. This implies that the user cannot cope with change or accept improvement in processes – basically, dumbing-down!

The second is more deliberate – an attempt to actually conceal the underlying processes by restricting the user’s access through a very limiting user interface, one that provides just enough interaction and controls for the user to view and use only their portion of the process – strictly partitioned by role. Such a user usually has a very vague idea of what happens in the process before their involvement, and what happens after they are done with their bit.

This sad state of affairs is more prevalent than you might think – look around you… How many applications and web sites take the trouble to provide users with a clear and simple map of their process? I don’t mean navigation aids like bread-crumbs or such – I mean usable information about what happens to the information you provide or the transaction you execute.
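
As an illustration, here is a minimal sketch of such a process map in TypeScript, assuming a hypothetical order-submission flow – every stage name and description below is invented for the example:

    // A user-facing process map: what actually happens to the information the
    // user provides, including the stages they never directly interact with.
    interface ProcessStage {
      name: string;
      description: string;  // what happens to the user's information here
      userVisible: boolean; // does the user normally interact with this stage?
    }

    const orderFlow: ProcessStage[] = [
      { name: "Submit",   description: "Your order details are validated and queued.",     userVisible: true },
      { name: "Review",   description: "A fulfillment agent checks stock and pricing.",    userVisible: false },
      { name: "Payment",  description: "Your card is charged; a receipt is generated.",    userVisible: false },
      { name: "Shipment", description: "The warehouse packs and ships; tracking is sent.", userVisible: false },
    ];

    // Render the whole map, hidden stages included – this is the "usable
    // information" about what happens behind the scenes.
    function renderProcessMap(flow: ProcessStage[], current: number): string {
      return flow
        .map((s, i) => `${i === current ? ">" : " "} ${s.name}: ${s.description}`)
        .join("\n");
    }

    console.log(renderProcessMap(orderFlow, 0));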

There are many routes to take – simplification by prioritization, simplification by streamlining of processes, simplification by masking, simplification through standardization, simplification through elimination, simplification by integration, simplification through decomposition (modularization) and simplification through automation…

I’m sure there are a few more ways, but what’s important is – all of them are valid, to a point. What’s more important – they are not mutually exclusive – a project team can utilize all or some of the above methods of simplification in developing a valid solution.

Of all the methods of simplification, the most cosmetic is simplification by masking – basically, throwing a sheet over the mess, so the user does not see the inner workings if they do not need to interact with them on a regular basis. This approach is just as valid as the others, of course… and often, it is the only recourse a designer has when told to make something “usable”. Although this approach can be utilized at any time in the development process, it has minimal impact if applied by itself, ignoring the rest of the “simplification” methods available.
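
To make the idea concrete, here is a small TypeScript sketch of masking as a facade – one simple call throws the sheet over the mess, but an escape hatch lets the curious user see what actually happened underneath. All class and method names are hypothetical:

    // The messy internals the user does not normally need to see.
    class TransferInternals {
      validateAccounts(from: string, to: string): string { return `validated ${from} -> ${to}`; }
      checkLimits(amount: number): string { return `limits ok for ${amount}`; }
      postLedgerEntries(amount: number): string { return `posted ${amount} to ledger`; }
    }

    class TransferFacade {
      private internals = new TransferInternals();
      private log: string[] = [];

      // The mask: one call, no visible complexity.
      transfer(from: string, to: string, amount: number): void {
        this.log.push(this.internals.validateAccounts(from, to));
        this.log.push(this.internals.checkLimits(amount));
        this.log.push(this.internals.postLedgerEntries(amount));
      }

      // The escape hatch: masking need not mean concealment.
      explainLastTransfer(): string[] { return [...this.log]; }
    }

    const facade = new TransferFacade();
    facade.transfer("A-100", "B-200", 250);
    console.log(facade.explainLastTransfer());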

While the designer may not have the authority or skill to apply the other methods, it is important to know that they are available. Project teams tend to discuss or utilize most of these simplification processes at some point or the other, but not necessarily in a systematic way.

If they are used properly, the life of the UI designer becomes that much easier, since he or she would be “skinning” an optimally designed application, i.e., one that has taken ALL system component needs into consideration, INCLUDING the human.

There is the fear, of course, of treating the user with indifference by reducing them to the level of a system component, but that is easily addressed by understanding that the human in the loop is the “control component” and thus at the top of the system hierarchy. This cannot be emphasized enough – but keeping this in mind, it should be possible to simplify without “dumbing down”.

_____________________________________
copyright 2004 ajoy muralidhar. all names, websites and brands referenced are the copyright or trademark of their respective owners.

September 13, 2004

Defying the Precautionary Principle


What is the Precautionary Principle?

If we look it up on Google, we find literally thousands of sites explaining the concept. Not that it's something new or revolutionary - remember the old saws advising you to "look before you leap" or "better safe than sorry" (or my personal favorite, "don't run with scissors")?

Originally, the Precautionary Principle was drafted to address responses to environmental issues, where long-term effects are unknown and decisions are made in an atmosphere of uncertainty. However, the guidelines are applicable to a wide range of human activity, especially in the industrial/business world. For a better definition, please refer to Wikipedia.
http://en.wikipedia.org/wiki/Precautionary_principle.

In the current business environment, where Organizations are forced to develop detailed plans for contingency management in order to ensure their continued existence, the Precautionary Principle has many advocates, especially among those who define Risk in terms of anything that has the potential to disrupt normal Organizational activities.

This broad application gives rise to a very large "gray area" where we find the Precautionary Principle being applied to every situation that involves uncertainty, even those that constitute the "normal" risks of doing business. Applying the Precautionary Principle strictly leads to ultra-conservatism, and an unwillingness to either participate in or support any action that is viewed as "risky".

Essentially, the principle of precautionary action has four parts:

1. People have a duty to take anticipatory action to prevent harm.

2. The burden of proof of harmlessness of a new technology, process, activity, or chemical lies with the proponents, not with the general public.

3. Before using a new technology, process, or chemical, or starting a new activity, people have an obligation to examine "a full range of alternatives" including the alternative of doing nothing.

4. Decisions applying the precautionary principle must be "open, informed, and democratic" and "must include affected parties."

What does this mean? What the Precautionary Principle is advocating is - PLAY IT SAFE.

In other words:

No breaking the rules - standards and processes are sacred.

No pushing the envelope - establish "nominal" values and make sure that everything falls within acceptable range.

No newfangled technologies - nothing but tried and true stuff.

No explorations - don't even think of going off and trying out anything that isn't in the books or expressly endorsed by Organizational policy.

No quick direction changes - exhaust all options available before changing direction, even if it means wasting time going down unfruitful avenues. Better to be sure we aren't missing anything.

Make decisions by Committee - consensus is all-important.


While Conservatism is welcome, we should not confuse pusillanimity with discipline. An organization that has a clear sense of what it needs, and follows a carefully considered path to achieve its goals, is conservative and disciplined in its approach. An organization that wavers endlessly because of its leadership's inability to make decisions or commit to a course of action is just plain pusillanimous.

Ergo - no innovation is encouraged. Just tried and true stuff, solid and dependable. And oh, yes.. Stick to what's known, follow the rules, adhere to the standards, and make sure you run everything by everyone else. Also known as designing by committee.

Everyone loves the Precautionary Principle. Governments, Militaries, Corporations, Universities, Organizations, you name it… No one ever got fired for following the maxims advocated by the PP. Honestly, it is a wonder that creativity and innovation exist at all.

However, if you think about it... it makes sense, in a way. The entities that are most likely to follow the Precautionary Principle are the ones that have the least history of innovative thinking. Creativity abounds in Individuals and Entities that defy the Precautionary Principle, so the most innovative ideas come from these small islands of rebels - but since they do not always have proper access to funding and other resources, they are likely to be bright flashes that shine briefly and die out.

Organizations know this - most Organizations will tell everyone who will listen that they love innovation and creative thinking, but they will also acknowledge that they cannot afford to run the financial risk of committing the whole Enterprise to defying the Precautionary Principle. They have obligations to their shareholders, customers and employees, and to the community as a whole.

Once in a while, an Organization that wishes to stand out, blow its competition away, or at least make a run at its competitors' market share will be willing to go out on a limb and make decisions that defy the Precautionary Principle - they usually do this by hiring an outside consultant who has a reputation for being "with it" or "in tune" with whatever market the Organization is trying to tap into.

Even here, they are forced to apply due diligence to ensure that the "specialist" team picked can work within the framework of the Organizational constraints, and that they will be able to deliver a workable solution within the timeframe and budget dictated by Business requirements.

The conservatism is understandable - there is simply too much to lose. But on the other hand, we cannot even begin to fathom what there is to be gained. Of course, for an organization that has a specific role, and will not seek to look outside its market boundaries, defying the Precautionary Principle may bring improvements in productivity, but these may be offset by the costs associated with the risks assumed by the organization.

So, where does this leave us? Are organizations doomed to look outside of themselves for true innovation? Is this the fate of all Organizations that attain a particular size, or commit themselves to public scrutiny?

I mentioned Google earlier in this article… Google is an organization that is seen as creative and innovative, but they are now a publicly held company, and as such, responsible to their shareholders. Gone are the days of free-thinking design radicalism - they will have to adopt and apply the Precautionary Principle like anyone else. Not that the company's leadership wants to do so - market forces compel them to comply with the Principle. How far they want to conform is still up to them, of course.

There are lots of Organizations that may not define risk quite so conservatively, and have a system of prioritization that allows them to assign specific costs to adopting or choosing not to adopt a particular course of action… this prioritization, when done in a fair manner, allows for a lot of flexibility to evaluate new processes, technologies and even philosophies in a controlled manner, while still keeping the potential for catastrophic failure or loss within the Organization's acceptable limits.
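
One plausible reading of such a prioritization scheme is expected-cost scoring – weigh the cost of adopting a course of action, including the chance that it goes wrong, against the expected loss of doing nothing. A minimal TypeScript sketch, with every field and number assumed purely for illustration:

    interface CourseOfAction {
      name: string;
      adoptCost: number;          // direct cost of adoption
      failureProbability: number; // chance the adoption goes wrong
      failureLoss: number;        // loss if it does go wrong
      inactionLoss: number;       // expected loss of doing nothing
    }

    // Expected cost of adopting: direct cost plus probability-weighted loss.
    function expectedAdoptionCost(c: CourseOfAction): number {
      return c.adoptCost + c.failureProbability * c.failureLoss;
    }

    const options: CourseOfAction[] = [
      { name: "New workflow engine",     adoptCost: 100, failureProbability: 0.2,  failureLoss: 300, inactionLoss: 250 },
      { name: "Status quo plus patches", adoptCost: 20,  failureProbability: 0.05, failureLoss: 100, inactionLoss: 20 },
    ];

    for (const c of options) {
      const adopt = expectedAdoptionCost(c);
      console.log(`${c.name}: adopt=${adopt}, do nothing=${c.inactionLoss}, ` +
                  `recommendation=${adopt < c.inactionLoss ? "adopt" : "hold"}`);
    }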

The ideal environment in which to "defy" the Precautionary Principle would be a large company with depth of resources, both financial and human, but one that is not continually subject to public scrutiny for every decision it makes. Such a company would have forward-thinking leadership, and a good sense of where it needs to be at different points in the future with respect to its competitors and the marketplace in general.

Companies that have a track record of innovation have usually also taken pains to set the expectation, among their customers and the marketplace, that their forays into unknown territory are absolutely necessary to continue their current successes. Because those expectations have been set, Risk in such a company will arise from NOT undertaking research into the unknown.

I am sure some will ask, "But what about Wall Street? What about Analyst expectations?"

Personally, I am not entirely convinced that analyst expectations are a bad thing. Expectations of continued growth in revenues may deter a company from taking unwonted risks, but may also spur the leadership to look for ways to increase revenue by exploiting hitherto untapped channels.
_____________________________________
copyright 2004 ajoy muralidhar. all names, websites and brands referenced are the copyright/trademark of their respective owners.

September 01, 2004

Uncheck that Box...

and step away very slowly, sir...

I mean that figuratively, of course. 'Checking the box' implies a user interacting with a system UI, making a selection that will influence the outcome. However, the user is also a component of the same system, and subject to the same kind of constraints as the rest of the system.

The human in the system (user) can be regarded as an autonomous component, and although subject to business, social and legal conditions, potentially has more freedom of action than the other non-human, non-autonomous components of the system, which are controlled by very strict application logic.

That "box", then, can also figuratively represent the system, bundled with the obvious choices, those that have been made for you by the System Designer. We shall consider the system as a box within which all the components are interacting towards some goal.

All systems exist for some purpose, and the goal of a system is to accept inputs and produce outputs to fulfill the purpose of its creator. This "purpose" could be the fulfillment of a business or personal need, or anything else - it does not matter. What matters is that it is a complete system in itself, that it encloses other systems, and that it is in turn enclosed by larger systems.

What happens if the box is not checked? Is the user, who is a system component, rejecting the will of the Designer? Is the Design supposed to suppress and control the components of the system, or simply to show the most efficient and productive way to achieve the system's (and thus, the Designer's) goals?

After all, the user is supposed to have a choice, does he not? A good designer does NOT design a black box, but rather creates a design that allows the user within the system to rip off the "casing" of the interface and bypass the normal channels, as long as they are still working towards the system goals.
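
A minimal TypeScript sketch of that idea – a guided path offering the Designer's pre-made choices, alongside an "advanced" path that bypasses them but is still checked against the system goal. The names and the goal predicate are hypothetical:

    // The system's goal, expressed as an invariant the result must satisfy.
    type Goal = (result: number) => boolean;
    const systemGoal: Goal = (total) => total >= 0; // illustrative invariant

    // The casing's sanctioned route: pre-made choices only.
    function guidedPath(choice: "small" | "large"): number {
      return choice === "small" ? 10 : 100;
    }

    // The ripped-off-cover route: user-supplied input, bypassing the normal
    // channels, but still required to work towards the system goal.
    function advancedPath(rawInput: string): number {
      const value = Number(rawInput);
      if (Number.isNaN(value) || !systemGoal(value)) {
        throw new Error("Custom path rejected: it does not reach the system goal.");
      }
      return value;
    }

    console.log(guidedPath("small")); // 10
    console.log(advancedPath("42"));  // 42 – unsanctioned, but goal-checked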

In reality, by giving the user the appearance of having a selection to choose from, we are providing the illusion of free choice - the options are already limited by the Design... and the User Interface, as the "casing" for the design, displays only the parameters, controls and input choices that the UI designer feels are suitable for the user.

So, even if the System Designer has designed a system that is robust enough to accept inputs that are not strictly controlled, the nature of the casing designed by the UI designer - based on his or her understanding and interpretation of the User Needs - has the ability to restrict the user's actions.

Let us assume that each possible choice made by the user represents a potential path through the system, each arriving at a predetermined goal. Then, what happens if a user within a system chooses to follow a path that is NOT represented? Should they be able to? Should he or she be able to create their own path through the system? What would happen if they chose not to follow any of the choices laid out before them? Are they to be accommodated or denied?

Let's see - logic states that if a user is not willing to pick one of the "available" choices, he or she should be informed that the system cannot continue to process their request, and the system should gracefully terminate the process by presenting the user with the option to quit or to try again.

If the user were able to manipulate the controls and enter their own values, chances are that the system would either freeze and die, or return an ugly error message mumbling something about values being outside of acceptable parameters. OK so far...

Now let's consider for a moment that it were possible for a user to choose his or her own path through the system, completely unrestrained - or even to reject the system, by choosing not to make ANY choice, not even that of creating a new path through the system... and suppose we allowed the system to permit the "uncontrolled" values. It would try to parse the information and make sense of it, and if the values were within range, would return something that may make sense to the user. Or perhaps not.
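
To make the contrast concrete, here is a TypeScript sketch of the three behaviours just described – brittle rejection, graceful termination, and a robust attempt to make sense of an uncontrolled value. The acceptable range [0, 100] is assumed purely for illustration:

    const MIN = 0, MAX = 100;

    // Freeze-and-die behaviour: the "ugly error message mumbling something
    // about values being outside of acceptable parameters".
    function brittle(input: string): number {
      const v = Number(input);
      if (Number.isNaN(v) || v < MIN || v > MAX) {
        throw new Error("ERR: value outside of acceptable parameters");
      }
      return v;
    }

    // Graceful termination: inform the user and offer to quit or try again.
    function graceful(input: string): { ok: boolean; message: string } {
      const v = Number(input);
      if (Number.isNaN(v) || v < MIN || v > MAX) {
        return { ok: false, message: "Cannot continue with this value. Quit or try again?" };
      }
      return { ok: true, message: `Accepted ${v}.` };
    }

    // Robust interpretation: try to parse something usable out of the
    // uncontrolled input, and return something meaningful if it is in range.
    function robust(input: string): string {
      const match = input.match(/-?\d+(\.\d+)?/);
      if (!match) return "Could not make sense of the input.";
      const v = Number(match[0]);
      return v >= MIN && v <= MAX
        ? `Interpreted your input as ${v}.`
        : `Found ${v}, but it is outside [${MIN}, ${MAX}].`;
    }

    try { brittle("150"); } catch (e) { console.log((e as Error).message); }
    console.log(graceful("150").message);  // graceful termination
    console.log(robust("about 42 or so")); // robust interpretation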

A qualifier here - rejection of the system by the user only implies that they will look for other ways to accomplish the system's goal, since it is the user's goal as well. All that "rejection" implies here is the availability of OTHER possible paths through the system in question, paths that are viable alternatives to the Designer-provided ones.

If a system component were to be truly without constraints, then its behaviour would be unpredictable, and thus irrational. The system's integrity would be rapidly degraded, and it would tend to destroy itself, if there were no interventions by the system itself, or an external agency to isolate and neutralize the rogue component.

But.. what if the system were robust enough to accept the user's rejection of "acceptable values" and accept whatever value was input? What would happen? Would the user be more effective? Chances are that most users would not know what to do next, but perhaps there is one more intrepid than the rest who is willing to play around and experiment.

What would this user need in order to utilize this system ability effectively? I submit that no matter what information the user were provided with by the Designer from WITHIN the system, it would be of limited use, since the User's knowledge of the universe outside the system is constrained by their frame of reference being the system they exist in.

Thus, even if users have NO constraints within the system they are in, they are limited by the fact that they are a component of the same system, and have little or no knowledge of the outside world. What is needed is a framework of reference external to the current system.

There we have it then - each system, with all its component systems and users, is the proverbial box - everything within is constrained by the fact that it exists within the same set of constraining (operational) parameters, and try as hard as we might, there is not much we can do to force a system to operate outside of its operating parameters - at least not without rebuilding it or breaking it in the attempt.

What of the external frame of reference then? Who provides it? By my reasoning so far, the only way to permit complete autonomy is to operate in a "boxless" environment - this implies complete freedom from constraints, which is not possible in the physical world. That is because all systems tend to be nested within each other; if we equate each system with a box, we can make the case for boxes within boxes, ad infinitum.

However, we can approximate such an unconstrained system by hypothesizing that the "next box", i.e., the external environment, is much bigger than the previous one - big enough that the constraints are not obvious. However, the larger system chosen as the external reference framework should not be so big that the user cannot understand or navigate it.

Thus, "thinking outside of the box" is not just desirable - it is essential to the understanding of any system. To truly understand a system, a user has to be outside of it. No system can be completely understood by its components - if the user is WITHIN the system, then no matter what we do from the point of UI design, workflow streamlining, information architecture, training etc., can change the situation if all the information we provide comes from within the system the user is immersed in.

As Designers - and here, I don't differentiate between the UI Designer, System Designer or any other kind - we have the unique ability to "step" in and out of the various systems we are designing, i.e., we constantly step in and out of the boxes we are designing.

It's true that we ourselves are subject to the constraints of our own "boxes" (the systems that we belong to). But we CAN provide the Users of the systems we are designing with the "external framework of reference" that is so necessary for them to have a better understanding of their system. They do not see what we see, and conversely, they may be able to see events and interactions within their system from a perspective that we cannot.

Our Design efforts can give the user within the system an improved framework of reference, albeit skewed by the refracting lens of their constrained perspective. A user within a system may be able to grasp all the interactions between all the components within his own system, but he or she will never be able to perfectly grasp the context of their system's relations to other systems, both bigger and smaller.

If Designers were to completely immerse themselves within the user's system, at the user's level, then they, too, become trapped within the same framework of reference as the users, and would thus be doing them a disservice by providing a design that keeps them from exploring the full potential of their system.

Thus, it behooves the Designer to explore the system he or she is designing from the perspective of all the constituent components at multiple levels - and balance it with information from outside of the system they are in. Only then will they be able to provide the user with the framework of reference and knowledge that they need to understand their system completely, and thus find ways to make it most efficient in realizing their goals.

_____________________________________
copyright 2004 ajoy muralidhar. all names, websites and brands referenced are the copyright/trademark of their respective owners.