Disclaimer

The content of this blog is my personal opinion only. Although I am an employee - currently of Nvidia, in the past of other companies such as Imagination Technologies, MIPS, Intellectual Ventures, Intel, AMD, Motorola, and Gould - I reveal this only so that the reader may account for any possible bias I may have towards my employer's products. The statements I make here in no way represent my employer's position, nor am I authorized to speak on behalf of my employer. In fact, this posting may not even represent my personal opinion, since occasionally I play devil's advocate.

See http://docs.google.com/View?id=dcxddbtr_23cg5thdfj for photo credits.

Sunday, June 12, 2016

Beyond the Mile High Menu Bar

Joel Spolsky discusses Bruce Tognazzini's "Mile High Menu Bar":
Designing for People Who Have Better Things To Do With Their Lives, Part Two - Joel on Software: "the mile high menu bar"

Tog invented the concept of the mile high menu bar to explain why the menu bar on the Macintosh, which is always glued to the top of the physical screen, is so much easier to use than menu bars on Windows, which appear inside each application window. When you want to point to the File menu on Windows, you have a target about half an inch wide and a quarter of an inch high to acquire. You must move and position the mouse fairly precisely in both the vertical and the horizontal dimensions.
But on a Macintosh, you can slam the mouse up to the top of the screen, without regard to how high you slam it, and it will stop at the physical edge of the screen - the correct vertical position for using the menu. So, effectively, you have a target that is still half an inch wide, but a mile high. Now you only need to worry about positioning the cursor horizontally, not vertically, so the task of clicking on a menu item is that much easier.
Based on this principle, Tog has a pop quiz: what are the five spots on the screen that are easiest to acquire (point to) with the mouse? The answer: all four corners of the screen (where you can literally slam the mouse over there in one fell swoop without any pointing at all), plus, the current position of the mouse, because it's already there.
Valid. Most self-taught UI programmers eventually figure this out for themselves.
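
Tog's point is, at bottom, Fitts's law: the time to acquire a target grows with the index of difficulty, ID = log2(D/W + 1), where D is the distance you have to move and W is the target's size along the axis of motion. A slammable screen edge makes W effectively unbounded in the vertical direction. A back-of-the-envelope sketch in Python, with made-up numbers that are mine, not Tog's or Joel's:

# Rough Fitts's-law comparison (Shannon formulation); the numbers are illustrative only.
from math import log2

def index_of_difficulty(distance, width):
    # ID = log2(D/W + 1); bigger means harder to acquire.
    return log2(distance / width + 1)

D = 10.0                                              # inches of travel to the menu (made up)
in_window_menu = index_of_difficulty(D, 0.25)         # quarter-inch-tall menu inside a window
mile_high_menu = index_of_difficulty(D, 5280.0 * 12)  # edge-stopped menu: a "mile high" target

print(f"in-window menu:  {in_window_menu:.2f} bits")
print(f"mile-high menu:  {mile_high_menu:.6f} bits (the vertical axis is essentially free)")

The only point of the sketch is that the vertical term all but vanishes when the edge does the stopping for you; you still pay the horizontal cost.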

But it misses something.  (Some things.)


(1) What goes up (to the menu bar) must go down (back to the current position)

Often, usually, when you go to the menu bar at the top, you also have to GO BACK to where you were.  And that place is no longer one of the "five easy places".
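
A minimal sketch of the "take me back" half, assuming the cross-platform pyautogui package (my choice for illustration; nothing above prescribes it): remember where the pointer was before warping to the menu bar, then restore it afterwards.

# Sketch: warp to the menu bar and come back, assuming pyautogui is installed.
import pyautogui

def visit_menu_and_return(menu_x=40, menu_y=5):   # hypothetical menu-bar coordinates
    here = pyautogui.position()        # remember the "document" position
    pyautogui.moveTo(menu_x, menu_y)   # slam up to the menu
    # ... in a real tool you would wait here for the command to be picked ...
    pyautogui.moveTo(here.x, here.y)   # take me back to where I was

visit_menu_and_return()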

One of the things I used to love most about using voice commands in conjunction with a pointing device (whether mouse, trackball, pen, or touch) is that I could say "make pen red" without having to change the cursor position. Voice has higher valency than menus and pointers - it is much easier to get at a large number of commands, like colors and pen widths, using voice than using a pointing device.

Context menus, drawn at the current position, help. But often, usually, the command that I want is not in the current context menu. Joel does not like options or customization, but I would love to be able to customize my context menu with the things I do most often. Or, rather, perhaps I would like to have my context menu CUSTOMIZED FOR ME, rather than having to figure out how to do it myself.
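
As an illustration of CUSTOMIZED FOR ME (my own sketch, not a description of any shipping feature): keep a running count of the commands I actually invoke, and build the context menu out of the most frequent ones.

# Sketch: a context menu that adapts to the commands I use most.
from collections import Counter

class AdaptiveContextMenu:
    def __init__(self, top_n=6):
        self.usage = Counter()
        self.top_n = top_n

    def record(self, command):
        # Call this from anywhere in the UI whenever a command is invoked.
        self.usage[command] += 1

    def entries(self):
        # The commands to show in the context menu, most-used first.
        return [cmd for cmd, _ in self.usage.most_common(self.top_n)]

menu = AdaptiveContextMenu()
for cmd in ["make pen red", "paste", "make pen red", "undo", "paste", "make pen red"]:
    menu.record(cmd)
print(menu.entries())   # ['make pen red', 'paste', 'undo']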

(Why don't I use voice commands much any more?  Because they kept breaking with every new OS release. It was a pain to maintain them.  And because Linux systems don't have voice support worth a damn, certainly not comparable to Windows. Or, if Apple and Linux do have voice support, it is just yet another thing that I have to set up.)

Pen and touch computers, interestingly, make it easier to move to buttons all over the screen.  It is also easier to move back - but moving away and moving back is still a pain.

In the really old, old days we sometimes had two pointing devices, two trackballs - or, equivalently, a switch that allowed a stationary relative pointing device to control two or more pointers. So you could switch to "the menu pointer", choose a command, and then switch back to "the document pointer".

(Stationary like a trackball. This can also work with a mouse, although not so nicely: a mouse is relative, but people often treat a portion of their mousepad as an almost absolute coordinate system. Warping to the menu and back is disconcerting if you are thinking as if your pointing device is absolute, or nearly so.)

I don't think non-power users would want such a multi-pointer setup.  Except possibly saying "Select menu", and then "Take me back".  
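
With only one physical pointer, that kind of switch could be faked by keeping two remembered positions and hopping between them. A sketch, again assuming pyautogui, with the voice or hotkey binding left out:

# Sketch: fake "two pointers" with one physical pointer plus two remembered slots.
import pyautogui

class PointerSlots:
    def __init__(self):
        # Hypothetical starting spot for the "menu" pointer; the "document"
        # pointer starts wherever the real pointer currently is.
        self.slots = {"menu": (40, 5), "document": pyautogui.position()}
        self.active = "document"

    def switch_to(self, name):
        # Save where the pointer we are leaving was, then jump to the other slot.
        self.slots[self.active] = pyautogui.position()
        self.active = name
        pyautogui.moveTo(*self.slots[name])

pointers = PointerSlots()
pointers.switch_to("menu")       # "Select menu"
pointers.switch_to("document")   # "Take me back"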


(2) Multiple Screens

For example, here is my current screen arrangement: I am currently typing near the bottom of the big 30" screen in the middle.



On the MacBook I am currently using, to get to the menu bar at the top of the screen I have to move approximately 15" vertically, using my trackball.

(By the way, I find using a trackball more accurate than using a mouse, unlike Joel. I think that is because I use a big 2" trackball that I can spin with full forearm and wrist motions, or tickle with my fingertips, as opposed to the old thumb ball that Joel talks about. The original trackball, by the way, was a bowling ball.)

Even using my trackball, it is a pain to go from the bottom to the top of the screen.   But it is even worse to have to reposition into the middle to get back to where I was.

By the way, I have that funny arrangement of screens:

(2a) because my screens are not all the same size: a laptop (MacBook), the biggest display I was able to buy (30"), and my old display (24"), now the second biggest, which serves as a secondary display.

(2b) offset in that funky way so that I have more of those supposedly easy-to-get-to corners that Tog talks about. E.g. I have 3 corners on the big 30" monitor, 3 on the 24", but only 2 on the laptop (a rough way to count such corners is sketched below). There's no way to get 4 corners on the main display while keeping the 24" display in portrait mode.
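
Here is the rough corner count as a sketch. The rectangles below are stand-ins I made up, not my real geometry, but with them the count comes out 2, 3, and 3, roughly as described above. The idea: a corner is a hard stop only if the pointer is walled in on both axes there.

# Sketch: count corners you can slam into, given monitor rectangles in one virtual desktop.
# Rectangles are (left, top, width, height); the numbers are made up for illustration.
MONITORS = [
    (0,    300, 1920, 1200),   # stand-in for the laptop panel
    (1920, 0,   2560, 1600),   # stand-in for the 30" display
    (4480, 100, 1200, 1920),   # stand-in for the 24" display in portrait
]

def inside_any(x, y):
    return any(l <= x < l + w and t <= y < t + h for l, t, w, h in MONITORS)

def slammable_corners(rect):
    l, t, w, h = rect
    count = 0
    for cx, cy in [(l, t), (l + w - 1, t), (l, t + h - 1), (l + w - 1, t + h - 1)]:
        beyond_x = cx - 1 if cx == l else cx + 1   # one pixel past the corner, horizontally
        beyond_y = cy - 1 if cy == t else cy + 1   # one pixel past the corner, vertically
        # A hard-stop corner: no neighboring screen to escape into on either axis.
        if not inside_any(beyond_x, cy) and not inside_any(cx, beyond_y):
            count += 1
    return count

for rect in MONITORS:
    print(rect, "->", slammable_corners(rect), "easy corners")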

By the way, I have often wished that the pointer could not cross from one screen to the other along the entire edge of the screen, but instead could only cross in certain places, leaving the other parts as "walls" that you can jam up against.



Similarly, such walls can also be inside a really large screen, giving you more "easy to jam up against" places that you can use for controls.
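
A sketch of how such a wall with a "gate" might behave, written as a pointer-motion filter (entirely hypothetical; I am not assuming any real window-system hook):

# Sketch: a vertical wall at x = WALL_X that the pointer may only cross inside a gate.
WALL_X = 2560            # hypothetical boundary between two screens or screen regions
GATE_Y = (700, 900)      # the only y-range where crossing is allowed

def filter_move(old, new):
    # Given the old and proposed new pointer positions, clamp at the wall unless in the gate.
    (ox, _), (nx, ny) = old, new
    crosses = (ox < WALL_X) != (nx < WALL_X)
    in_gate = GATE_Y[0] <= ny <= GATE_Y[1]
    if crosses and not in_gate:
        # Stop against the wall on the side we came from, keeping the vertical motion.
        nx = WALL_X - 1 if ox < WALL_X else WALL_X
    return (nx, ny)

print(filter_move((2500, 400), (2600, 410)))   # blocked by the wall: (2559, 410)
print(filter_move((2500, 750), (2600, 760)))   # passes through the gate: (2600, 760)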



(3) While we are at it: virtual reality displays may have no natural boundaries to jam up against. No edges. 360 degrees, on all axes. Synthetic user interface boundaries may become more necessary - or we may have to give up on edges as command points altogether.