The content of this blog is my personal opinion only. Although I am an employee - currently of Nvidia, in the past of other companies such as Imagination Technologies, MIPS, Intellectual Ventures, Intel, AMD, Motorola, and Gould - I reveal this only so that the reader may account for any possible bias I may have towards my employer's products. The statements I make here in no way represent my employer's position, nor am I authorized to speak on behalf of my employer. In fact, this posting may not even represent my personal opinion, since occasionally I play devil's advocate.

See http://docs.google.com/View?id=dcxddbtr_23cg5thdfj for photo credits.

Tuesday, December 22, 2009


Will markets like IPXI allow inventors to be more independent of big companies? Will they promote innovation?

in reference to: IPXI (view on Google Sidewiki)


Will markets like IPXI allow inventors to be more or less independent of big companies?

in reference to: IPXI (view on Google Sidewiki)

Download My SideWiki

I would like the ability to save all of my sidewiki entries, on all pages anywhere.

I would like the ability to save sidewiki entries from others that I have tagged.

I would like my sidewiki to be integrated with a bookmark server like del.icio.us

in reference to: Google Sidewiki (view on Google Sidewiki)

Monday, December 07, 2009

People are using personal computers at work

I have long maintained "the craftsman owns his tools", and that, since the computer is my main tool, I should therefore own the computer I work on.   Especially since IT departments so often provide lousy computers to employees.

This Inquirer article says "People are using personal computers at work". Around 1 in 10, and growing.  Cost savings of 9 to 40%. IT departments resisting.  A growing trend, perhaps instigated by the recession, but, I hope, becoming more prevalent.

For years I used my own personal computers.

At Intel, one of the earliest personally owned computers I used was my Compaq Concerto pen computer circa 1994-2000 - which I bought because I wanted to prove that speech recognition was better than handwriting recognition, and which I fell in love with. It was followed by a succession of tablet PCs.

At AMD, when I started writing the K10 proposal I went out in the morning to buy a Wacom tablet at Fry's. Drawing on this was so frustrating that at lunch I went out and bought a Toshiba Portege 3505. I wanted to connect it to external video, but the cables stuck out awkwardly into my lap, and there was no software screen rotation - so I took the external monitor and flipped it over, using the styrofoam packing cases to support it upside down.

Unfortunately, when I returned to Intel in 2004 IT had clamped down. No personal computers allowed to be connected to the Intel network. Company provided laptops only. I wasn't allowed to use a tablet PC. No matter that drawing was part of my job - tablet PCs were only for managers. Ditto Blackberries or any other device that allowed you to look at your calendar without having to wait 15-30 minutes to reboot. No matter that 10 years earlier we all carried pagers that were pinged for meetings - once IT took over, no personal equipment, and technology regressed. Frustration at this is one of the reasons I left Intel - although I will give Doug Carmean of Larrabee credit: he was much more willing to push IT and get his engineers the equipment they needed. For this reason, if no other, I hope that Doug and Larrabee overcome the adversity announced today.

My current employer's IT department tries to clamp down. I use a company provided notebook PC because I must keep my personal work and my company work strictly separated - to protect MY rights as an inventor in my spare time. But I have 5 LCD displays at work, 3 of which, with their 3 USB display adapters, I bought myself. Plus keyboards. IT insisted on buying me a trackball, and I have been told that IV will purchase displays and adapters and keyboards such as I want. But purchasing my own allows me to experiment - to try to find the combination of equipment that makes me most productive.

After all, my productivity is my business. It influences my success at work, and how much I enjoy my work. IT is motivated by cost reduction as often as by productivity.

This is not specific to the IT departments of my past or current employer.  It seems to be true of all IT departments.  I have seen exceptions only (a) during the P6 project, when we essentially had project specific, non-corporate, computing support, and (b) at University.  Perhaps also (c) at the Little Software House on the Prairie.

I admit an ulterior motive: I hope not to be an employee forever. I want to be independent.  Perhaps consulting for my current employer.  Perhaps something else.

But, whatever: I don't want to have to deal with productivity limiting IT departments.

The PC Revolution happened in large part because individuals could purchase PCs outside the control of corporate IT departments.  But the corporate IT departments caught up, and regained control.

Speech vs pen vs keyboard vs ALPs pad

Last Friday I posted to Facebook about how happy I was with my end of week Seattle to Portland commute. I was the only person in the van, so I plugged in my headset mike and started dictating via Vista's built-in speech recognition. Sweet! Most productive 4 hour van ride I have yet had.

On this Monday morning's commute there are 2 other passengers, sleeping. I don't want to wake them, so I have been reading USEnet news and Gmail using my pen - handwriting replies, but mainly point and click. Useable, but it doesn't feel as productive.

Typing this on the keyboard. Not as nice as speech. Errors caused by the bouncing of the van - mainly due to the ALPs pad: the van bounces, my hand brushes the pad, and the typing goes astray. I think that I will have to disable the ALPs pad while in the van and get another pointing device. Wish I had an IBM style pointing stick. Q: what other non-ALPs pointing devices are there that can be used in a bouncing van?

Bottom line:  for a bouncing van (also, bouncing small commuter planes):

Speech wins, if not annoying others

Pen/handwriting okay

Keyboard maybe

ALPs pad loses

Wednesday, December 02, 2009

Thought during my morning bicycle ride: BO and Date of Urbanization and Transportation

Thought during my morning bicycle ride:

It is well known that Americans seem to be more sensitive to body odor than people in many other countries. As I was having my shower after my bicycle ride, I wondered if this might be due to the fact that European countries started commuting into big cities earlier than the United States did. The United States was quite an agrarian nation even up until the first world war. By the time the United States urbanized, public transportation on buses and streetcars and trains was quite common, and eventually the United States became the automobile civilization. Whereas Europe urbanized earlier, so people were commuting by walking or bicycling etc., which meant that the European worker class, and even the European office bureaucrat class, got to work a little bit smellier, unless they had a shower - which wasn't common until the end of the 20th century. I think about Andy Grove saying "there will never be showers at Intel", and I wonder if this was perhaps a Hungarian attitude.

The United States has many more humid areas, in the American South, than most European countries do. This might also be part of the explanation - especially since being sweaty and hence smelly was indicative of class.

Tuesday, December 01, 2009

Links to MLP, Coherent Threading, Multistar

Urgh. Let me just add some links to the blog, from my Google docs "website" root:

Other Stuff

* MLP Yes! ILP No!
o presentation I gave at ASPLOS 98 WACI session
o preserved for more than 10 years by the session organizer at http://www.cs.berkeley.edu/~kubitron/asplos98/final.html,
+ specifically http://www.cs.berkeley.edu/~kubitron/asplos98/slides/andrew_glew.pdf
o a copy kept on Google docs: http://docs.google.com/fileview?id=F.cb345d6b-c4ac-40c6-9e71-bf5d4d18af55
+ it is unclear if Google docs allows anyone to read this - i.e. it is unclear if one can "publish" to the world an uploaded presentation

* Multistar:
o The Story Behind Multistar: http://docs.google.com/View?id=dcxddbtr_40czbtrtf2
o Multistar PDF (2004): http://docs.google.com/fileview?id=0B5qTWL2s3LcQZDIyZDVmN2EtYjY4MC00YjU2LWE4ZGMtYzk2MmU4M2U2NDQ5&hl=en

* Berkeley ParLab talk on Coherent Threading: (2009)
o Coherent Threading
Coherent Vector Lane Threading (SIMT, DIMT, NIMT)
Microarchitectures Intermediate Between SISD and MIMD, Scalar and SIMD parallel vector
o http://docs.google.com/fileview?id=0B5qTWL2s3LcQNGE3NWI4NzQtNTBhNS00YjgyLTljZGMtNTA0YjJmMGIzNDEw&hl=en

The Story Behind Multistar



I've been exploring ideas for large out-of-order machines, such as MultiClusterMultiThreading (MCMT), Multilevel Instruction Windows, and Multilevel Branch Predictors, for years - actually since before I joined Intel for P6 (which was a single level OOO machine), and especially after P6, when I attended the University of Wisconsin, at which I did NOT get my PhD, and did NOT get anything published, especially NOT multilevel branch prediction, but where I gelled my MCMT ideas.

I took these ideas to Intel when I returned in 2000, and then to AMD in 2002.  Of course, I took only my UWisc ideas that were public to AMD, nothing from Intel.  I can't talk about what I did at either Intel or AMD, and it probably won't ever see the light of day. I am happy to see that AMD has announced that Bulldozer in 2011 will be an MCMT machine, even though they switched my definition of cores and clusters around. Even if AMD patents my ideas, they probably won't give me credit.

But anyway...  I left AMD in June 2004, and rejoined Intel in August 2004.  In between was one of the few periods in my career when my work was not immediately assigned to an employer like Intel or AMD.

So, I spent the summer surf kayaking at Oceanside, Oregon.  And, at the last minute, writing up these "MultiStar" ideas.   My goals were three-fold: 

(1) As usual, I just plain love computer architecture.

(2) I wanted to have something that I could start work on immediately if I decided to quit Intel and finish my Ph.D.   (The biggest pain about working at AMD was that I left behind 10 years of ideas that I had created at Intel, that I could not use at AMD.)

(3) Lastly, the idea of getting patents outside of a big company was attractive.  I have almost 100 patents through my employer; why not a few on my own?  Heck, if only I had patented the aspects of the P6 microarchitecture I invented at UIUC in my MSEE, such as my form of register renaming, HaRRM (Hardware Register Renaming Mechanism)...

So I wrote up MultiStar.  I had to go beyond any microarchitecture I had done at Intel or AMD.  I did not use any ideas that belonged to Intel or AMD.  I could only use ideas that were already public, or which I created new and fresh in the summer of 2004.  I had to invent new ways to do things that I had already invented once or twice before, at Intel or AMD.  I had to leave a few parts of the machine unfinished, because I had not invented new ways to replace what I had done earlier.

I called it "MultiStar" because I arbitrarily decided to make it an out-of-order microarchitecture with multi-level everything. Multilevel branch prediction, I$ (easy), decoder, microcode, renamer, scheduler, register file, instruction window, retirement, data cache. Multiple clusters. Everything. I don't necessarily recommend multilevel everything as a way to build a machine, but, surprisingly, the ideas fit together remarkably well. I think it could be built.

I was especially happy that I invented new ways of building a multilevel instruction scheduler and register file / operand bypass mechanism - solving problems that I had been trying to solve for years at Intel and AMD. This solution achieves the sort of pleasing elegance that makes you feel confident you have it right. The time and place I invented this sticks in my memory (above the waves in Oceanside), like the time and place I invented the form of register renaming used in P6 (UIUC, Hwu's classroom, winter, pipes banging), and the time and place I invented Intel MMX (driving back from Princeton with Bob Dreyer, after the i750 was cancelled).

I wrote up multistar.  Emailed copies to Hwu and Patt, and a few others.  Joined Intel, disclosing multistar, all umpteen pages of it, as the "Intellectual Property Preceding Employment".

And, oh, yes, assigned multistar to an invention company to apply for patents. Using exactly the same disclosure as I provided to Intel. You can see the patent applications at the USPTO website, since they become public a short while after application. Unfortunately, I was not able to work on the patent applications after I rejoined Intel, since I did not want to risk contaminating them.

At the time, I thought that multistar was more than 10 years ahead of what Intel or AMD would consider building. Time will tell.


By the way:  I am quite pissed off by all of the people who say that single-threaded CPU microarchitecture has run into a power wall.  Yes, power is hard, and yes, performance does not go up linearly with number of devices.  Performance only seems to go up as the square root of the number of devices, so-called Pollack's Law.  But performance still goes up.  And power need not be linear in the number of devices.

As I am wont to say, the square root of an exponential is still an exponential.
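That quip can be sketched numerically. Below is a toy Python model (my own illustration, not anyone's measured data): assume Moore's law doubles devices every generation, and take Pollack's Law to mean single-thread performance goes as the square root of devices. The generation-over-generation speedup is then a constant factor of sqrt(2) - still exponential growth, just with half the exponent.

```python
import math

# Toy model: Moore's law doubles devices each generation;
# Pollack's Law says single-thread performance ~ sqrt(devices).

def devices(gen):
    """Device count after `gen` doublings, normalized to 1 at gen 0."""
    return 2.0 ** gen

def perf(gen):
    """Single-thread performance under Pollack's Law."""
    return math.sqrt(devices(gen))

# The speedup between successive generations is a constant factor,
# sqrt(2) ~= 1.414 -- i.e. performance still compounds exponentially.
ratios = [perf(g + 1) / perf(g) for g in range(5)]
print(ratios)
```

So performance per generation still compounds; it just compounds more slowly than device count does.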

I am slightly pissed off that saying this seems to put me in the camp of single thread OOO microarchitecture bigots. I've been working on multiprocessor and multithreaded microarchitectures for years, again since my undergraduate days. Sure, I like using them to build SpMT, but I am also quite eager to use them to build multithreaded systems. If you have parallel workloads. And I have ideas how to make writing parallel code easier. I like working on exascale supercomputer architectures with millions of processors and billions of threads. I have been a loud advocate of highly parallel GPU-style SIMT Coherent Threaded microarchitectures.

I am NOT just a single thread OOO bigot.  I know how to make BOTH single threads and multiple threads run faster.

Single thread OOO microarchitecture ran into the power wall because Willamette was a stupid microarchitecture.  Emphasizing high frequency because it was a marketing gimmick, and because the Willamette microarchitects were not confident about how to build more advanced OOO. Single thread OOO microarchitecture ran into the power wall because the guys building Nhm were weaned on Willamette. And because Intel and AMD became reluctant to do anything that was not incremental.

Willamette had some good ideas.  Even replay, the cause of so much instability, can be used effectively, e.g. with transitive cancellation to prevent replay tornadoes.  But Willamette gave them such a bad reputation that ideas like replay may not be looked at again for 10 years.  (It's already been almost five.)

Actually, multistar is really quite incremental.  It applies a well known technique, multiple layers, to several microarchitecture datastructures.  Working out the details of how to do so is not necessarily obvious.

Multistar was one of the best ways I knew of to build large OOO machines in 2004, building on ideas in the public domain, plus a few weeks of new ideas. It isn't even the best way I know how now, although it does have some ideas that were new at the time.

Of course, my ideas continue to evolve.

Minor updates to my Google Docs website

Added multistar microarchitecture thoughts from 2004 (that are NOT owned by Intel or AMD).

Added a link to my presentation on Coherent Threading GPU architectures, given at UC Berkeley ParLab in August 2009.

Monday, November 23, 2009

I love USB Display Adapters!

I love displays! I love looking at large numbers of pixels, relatively large pixels for my aging eyes.

When encountering the various fanboyisms of my friends and coworkers - gaming, netbooks - I have often felt somewhat embarrassed, since I'm really not that much into games, not that much into PCs of clamshell formfactor. I am somewhat into tablet PCs and handhelds, but those are expensive enough that I cannot really exercise my enthusiasm.

But LCD displays have come down in price. And USB display adapters have made it feasible to attach many displays to my laptops - I still haven't bought a desktop system in more than 10 years.

Best of all, I can almost, *almost*, act as if my love of large display surfaces is work related. It sure does help to be able to look at really, really, wide spreadsheets (although really, really, wide spreadsheets are a bit of an abomination). It helps to be able to read papers, or patents, in PDF a full page at a time.

Yesterday and today I went a bit overboard. It had been building for a while. Confronted with the aforementioned really wide spreadsheets, I went and bought a second 1680x1050 monitor for use at work, matched to the company provided monitor. (After asking IT, who said that I could only have two monitors if they were smaller, 1400x1050. Which rather misses the point.)

Since I wanted to work at home in Hillsboro, as well as at work in Bellevue, I bought a second LCD monitor at home. But this was 1900x1200. Do you realize how much more you can see on a 1900x1200 monitor? Almost didn't need to stretch the spreadsheet across both monitors. Since I have no docking station, I used a Tritton SEE2 Xtreme USB display adapter. Which works fine, and which allowed me to have not just two, but three displays: the two 1900x1200 external monitors, and my laptop's LCD.

It's a slippery slope. Last week I almost went out and bought 2 more monitors for use at work. Instead, I decided to drive my two 1900x1200 monitors from Hillsboro to Bellevue, carefully wrapped in sleeping bags and clothes. So now, on my big Biomorph desk at work (another piece of personal equipment) I have 5 monitors: two 1900x1200 in landscape mode, and the two 1050x1680, in portrait mode. Plus the laptop LCD display.

I originally set these up with 2 different USB display adapters: the old Tritton SEE2 Xtreme, and a new Diamond USB Pro, bought last night on my way to Bellevue. This gave me 4 monitors, in combination with the two DVI ports on my Dell docking station. But there were issues: in particular, Windows restricted me to 16 bit color on one of the displays. Plus, I had forgotten an AC cable.

First trip back to Fry's: bought the power cable. And another Diamond USB Pro. Now all works... except that the Tritton monitor keeps misbehaving, occasionally hanging. So I make another trip back to Fry's. Now I have 3 Diamond USB Pros: 2x1900x1200 + 2x1050x1680 + the laptop LCD. The laptop resolution is reduced, to 1200x800, but I can't really complain.

Let's see, that's just over 9 megapixels, if I have done my math correctly. Most of it driven by USB. Probably no good for video or games, but good enough to throw a lot of data up where I can look at it.
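For what it's worth, here is the pixel arithmetic as a throwaway Python check (resolutions as listed above, with the laptop at its reduced 1200x800):

```python
# Sum the pixels across the five display surfaces described above.
displays = [
    (1900, 1200),  # external monitor, landscape
    (1900, 1200),  # external monitor, landscape
    (1050, 1680),  # external monitor, portrait
    (1050, 1680),  # external monitor, portrait
    (1200, 800),   # laptop LCD at its reduced resolution
]
total = sum(w * h for w, h in displays)
print(total, round(total / 1e6, 2))  # 9048000 pixels, ~9.05 megapixels
```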

More! I want more! More slow pixels! If I could plug in e-paper displays all about my office, I would.

We're on the verge of LCDs and e-paper being cheap enough to replace the whiteboards that are ubiquitous in offices. The nice thing is that this is a continuous acceptance curve: it's not as quantized as many other application areas are.

Eventually, we must get rid of refresh.

Wednesday, October 14, 2009

Travelling, with Power Supplies

Packing for a short trip - 2 days and nights on business, 2 back with my family in Portland, and then back to Bellevue.

A significant portion of my luggage is taken up by power supplies.

What I want to take: 2 power supplies, 1 for each of my laptop PCs, work and personal; 2 smaller power supplies, 1 for each of my phones, work and personal; 1 big supply for my CPAP medical device.

I have separate work/personal PCs/phones because I still have to go to great lengths to separate work stuff from personal stuff. I wish I only had one.

I'll eliminate one of the phone chargers, at the cost of not being able to charge them simultaneously.

I love the idea of "universal" chargers like the iGo, so much so that I have two, and probably need to buy a third for my new devices. They aren't as universal as one might hope. Unless one adapter can plug in 2 PCs and 2 phones simultaneously, you have the "Gotta remember to swap chargers" problem. Many such universal chargers can charge a cell phone and a PC simultaneously.

I wish that I could also squeeze a power bar or squid into my carry-on.

Sunday, September 27, 2009

Why IV? Dinosaurs!

My wife biffs me upside the head for taking so long (circa 5 years) to work up the nerve to leave Intel and join IV:

    Boring cubicles  
    An office with dinosaurs from Jurassic Park

Wednesday, September 09, 2009


I have left Intel effective yesterday, and will join Quantum Intellectual Property services on Monday.

Saturday, September 05, 2009

"My" Photograph


Many websites, in particular social networking sites, suggest that you provide a photograph.

Here's the one I use.

Why is this "My" Photograph?

Anecdote: a coworker, Matt Merton, sent an email postcard back from a trip to Italy with the bust of a Roman Emperor, asking "What computer architect does this remind you of?"

The likeness was remarkable. Curly hair, beard.

Since I have always been a fan of Roman history, in particular of the "good emperors" such as Hadrian and Marcus Aurelius, I decided to adopt this as "My" Photograph, for use on websites, etc.

I may have lost Matt's original email, but the photograph credited below seems reasonable.

Unfortunately for me, the photograph was not of Marcus Aurelius or Hadrian, or even Trajan, but instead was of Antoninus Pius, about whom I know little.  The adjective "Pius" does not really apply to me.  But what little is known of this emperor makes him seem like a good guy, although some blame him for making Rome too comfortable, softening Rome up for the barbarian invasions that plagued his successors.

Source, Attribution, and Credits for "My" Photograph

This photograph is derived from the photograph


which contains the following licence information:

This file is licensed under Creative Commons Attribution 2.5 License

In short: you are free to distribute and modify the file as long as you attribute its author(s) or licensor(s).

Picture shot by Marie-Lan Nguyen (user:Jastrow) and released under the license(s) stated above. You are free to use it for any purpose as long as you credit me and follow the terms of the license.

Example:  © Marie-Lan Nguyen / Wikimedia Commons

I am using images, such as JPG, GIF, or the like, of this photograph as "me" on various sites, for reasons explained here.

Unfortunately, it is occasionally hard to attach the attribution and credit to the image where used. For example, many sites only allow images, JPG or GIF, to be attached, with nowhere to put credits. I would like to associate a link or flyover with the image; unfortunately, many sites do not allow this - or else I do not know how to do so. I made some attempt to embed the credits within the photograph, but my photo editing skills are not so good.

I am therefore creating this page describing `"My" Photograph', and providing the attribution and credit here. I will try to link to this page, or to the page http://docs.google.com/View?id=dcxddbtr_22f8pwcsd4 which describes other social networking data, and which links to this page, in an effort to provide full credit.

Note that I attempted to contact the photographer using the email address provided, to no avail.

Making lemonade with my Toshiba M400 Tablet PC

I have blogged in the past about my Toshiba M400 Tablet PC. I so much loved my older Toshiba 3505 Tablet PC, but the M400 is just plain and simply a lemon. It has sat unused for the almost two years since I bought it, because it was simply too loud to be used at home.

Not just my PC, but apparently the whole M400 model line. Googling finds many people with my complaint: fan too loud, hangs. One fellow even went so far as to void his warranty by installing an on/off switch for the fan.

Several sites discussed how the fan noise was a bigger issue in Japan than in the United States. Apparently the Japanese are less tolerant of unnecessary noise than are Americans. Yet another thing I have in common with Japan.

Having a bit of free time, I decided to try to make a go of it again. Reinstalled from scratch. Tried using various utilities to tune CPU performance and fan speed: RM Clock, Notebook Hardware Control, and, most usefully, SpeedFan. However, although the latter accomplished some tunings, they were insufficient to make it tolerable.

"When life gives you a lemon, make lemonade..."

The M400 is too loud to use at home. However, we have a cottage near the ocean, filled with the roar of the surf. The M400 is, barely, tolerable here. The M400 is louder than the surf, especially when it goes into what I call "Hyper Fan" mode; but it may be okay to leave the computer at the cottage. Heaven knows I am getting no other good use out of it.

Annoyingly, the M400 is the first machine I bought a multi-year warranty for. It expires next year - if only there were any hope of it being fixed. But past attempts were in vain.

Wednesday, August 05, 2009

"Website" Root

Andy Glew @ Google Docs

This is Andy Glew's "website" at Google Docs.


Its main purpose is to hold my curriculum vitae - a very verbose approximation to my resume, approximately 11 pages. Also a much shorter resume, approximately 1 page.

This site was updated Dec 27, 2008. As of this date the most up-to-date versions of my resume and CV may be found on Google Docs:

Other Stuff

AutoHotKey scripts: http://www.autohotkey.net/~glew/

Lots of posts on comp.arch.  Many excerpted for USEnet Nuggets in CAN, Computer Architecture News.

History of Andy Glew Websites

This Google Docs "site" replaces an earlier Geocities website: http://www.geocities.com/andrew_f_glew.

Rather sad: Google Docs has a better editor, but Yahoo GeoCities allowed me to have more meaningful URLs. Google Docs cannot really be said to be a website at all, since the URLs are gobbledygook.

You may find other, stale, websites for Andy Glew scattered around the web, often at schools I attended where my old web pages live, but where I can no longer log on. E.g. UWisc. These stale pages may have some useful data not represented here, since intervening websites such as Geocities were more restrictive, in disk space and convenience, than a university website, but the resumes there are out of date.

Similarly, Andy used to work at Intel MRL, where there was Andy's MRL bio. MRL's web site was never actually editable by Andy, only by the MRL webmaster.

For an example of a web based system that is easier to maintain, look at Ward Cunningham's wiki, e.g. Andy's wiki home page.

My LinkedIn pages provide some information, but are limited in size and, hence, detail.

See my social networking doc page for information about my presence on various social networking tools. In particular, see the "My" Photograph page for credits for the photograph I use on many such pages.

More AutoHotKey stuff

I just uploaded http://groups.google.com/group/lifehacker-coders/web/Glew-AHK-stuff.tgz which contains a tree menu system, navigable entirely by keyboard, that contains a whole slew of Windows commands (basically, the simple commands in the AutoHotKey help), plus a few "keyboard modes" to allow mouse motion, window motion, window sizing, etc. by keyboard. Plus the "maximize to half screen" and "lower right screen" commands.

Posting such stuff to the Lifehacker / AutoHotkey mailing list. Probably also www.autohotkey.net

Monday, August 03, 2009

AutoHotKey Swap Delete and Backspace

I found that the standard AutoHotkey technique to swap BS and Del did not work for me, because I need to turn it on and off quite a lot as I move around.

So, I wrote the following.

Would post it to some AutoHotKey sharing site, but can't seem to access such at the moment.

; AutoHotkey Version:
; Language: English
; Platform: Win9x/NT/XP
; Author: Andy Glew: andy.glew@intel.com, ag-ahk@patten-glew.net
; Script Function:
; Swap Delete and Backspace
; with a menu to enable/disable.
; Why?: because I am constantly docking and undocking my laptop
; when undocked I don't need to swap Del and BS
; but when docked at work, with my Happy Hacker keyboard, I do.
; And I found that killing the script was a bit too annoying.


#InstallKeybdHook ; unconditionally installing the keyboard hook.
; TBD: may not be necessary
; TBD: may waste memory and slow down system

#SingleInstance force ; replace any already running instance of this script

#NoEnv ; Recommended for performance and compatibility with future AutoHotkey releases.
SendMode Input ; Recommended for new scripts due to its superior speed and reliability.
SetWorkingDir %A_ScriptDir% ; Ensures a consistent starting directory.

; Autoexec'ed
Menu,DeleteBackspace,Add,Swap backspace and delete,ToggleBsDelSwapSetting
Menu,DeleteBackspace,Add,#!Del/BS - this menu to enable/disable,ToggleBsDelSwapSetting
SwapBackspaceAndDelete = 0
return

; Win+Alt+Del or Win+Alt+BS pops up the enable/disable menu
#!Del::Menu,DeleteBackspace,Show
#!BS::Menu,DeleteBackspace,Show

; Menu handler: toggle the swap on/off and check/uncheck the menu item
ToggleBsDelSwapSetting:
Menu,DeleteBackspace,ToggleCheck,Swap backspace and delete
SwapBackspaceAndDelete := SwapBackspaceAndDelete ^ 1
return


; #-Win, !-Alt, ^-Ctl, +-shift
; # right Win, etc.
; * - ignore other modifiers
; $ - prevent self-recursion (e.g. {Del} can send {Del})


; On the Happy Hacker keyboard, I need to swap Delete and Backspace

; TBD: I wish that I could create a function to do such swapping, but I seem to be unable to in AutoHotKey

$*Del::
;MsgBox Delete %SwapBackspaceAndDelete%
if ( SwapBackspaceAndDelete ) {
    SendPlay, {BackSpace}
} else {
    SendPlay, {Delete}
}
return

$*BS::
;MsgBox BackSpace %SwapBackspaceAndDelete%
if ( SwapBackspaceAndDelete ) {
    SendPlay, {Delete}
} else {
    SendPlay, {BackSpace}
}
return

$*NumPadDel::
;MsgBox NumPadDel %SwapBackspaceAndDelete%
if ( SwapBackspaceAndDelete ) {
    SendPlay, {BackSpace}
} else {
    SendPlay, {NumPadDel}
}
return


Sunday, August 02, 2009

Dragon Speech Recognition

I have just installed Dragon speech recognition. I used to use Dragon speech recognition more than 10 years ago, but gave up, not because of speech recognition quality, but because it annoyed cubicle desk neighbors, and mainly because it became a pain to have to reinstall on every new computer.

I have opened this blog entry to record a few impressions as I start using Dragon again after this decade of nonuse.

I think that I needed to enter more training this time around: about 12 minutes of reading Arthur C. Clarke.

I thought that under the old Dragon I was able to turn the microphone on and off by voice. Or, rather, I think there were three microphone modes: (A.) completely off, where it can't be turned on or off at all by voice; (B.) completely on, used for all commands; and (C.) a mode where the microphone was actually on but where all commands except something like "turn the microphone on" were ignored. The last mode was convenient for disabling accidental voice commands.
Ahhh... there is such a mode: "go to sleep" or "stop listening."

The Dragon bar has something called a "select and say" indicator, which is supposed to indicate whether the application you are connected to supports all of Dragon's functionality. I find it rather amusing that the application bar is yellow when connected to the Dragon NaturallySpeaking tutorial. I.e. the tutorial program doesn't support Dragon completely.

It is a bit sad that Firefox's text window, such as the one I am speaking this blog entry into, inspires Dragon to say "dictating to a nonstandard window". I wonder if this will be enough to force me to use Internet Explorer for my blog entries.

It is already evident that speech recognition, once again, allows me to be more verbose than when I was typing. Since I am already pretty verbose even when I am typing, this may be considered a downside. On the other hand, it looks like Dragon's spelling is better than my spelling when I am typing.

Dragon's undo and redo capabilities seem somewhat restricted, particularly when you are dictating words like "undo" and "redo". I think the biggest annoyance that I've had with speech recognition so far is accidentally undoing.

Microsoft's habit of automatically changing the focus to windows as they pop up just caused me a problem: I started speaking into a pop up window.

Dragon's correction occasionally has problems: I just tried to correct "cause" into "caused", and Dragon recognized the correction correctly, but somehow got the insertion of the correction incorrect, resulting in "ccaused".

Thursday, July 30, 2009

2 vertical displays - sweet!

I've been using 2 displays, my laptop's built-in LCD and an external display, for a few years.

A month or so ago, my laptop was changed to one supporting a widescreen format. Within the last few weeks I rotated my external monitor to portrait mode, allowing me to see full page PDFs. However, this gave me an irregular arrangement of display surfaces: my landscape mode laptop, next to my portrait mode external display. It was quite easy to lose the cursor here. Fortunately, tools such as Nvidia's control panel, which allow the mouse to skip over non-visible display surface, and AutoHotKey, helped.

On Tuesday EW showed me his setup: 2 external displays driven by the same laptop. I had not realized that the HP laptops can drive two external displays, one using VGA, the other using DVI. Unfortunately, the laptop LCD display must be disabled.

Yesterday I started using this: driving 2 external displays, both rotated into portrait mode (actually, inverse portrait, a 90 degree clockwise (right) rotation). This is very nice. It is a huge win to be able to have 2 "full page", 8.5" x 11", windows side by side, on the different monitors. More relevant: 60 lines x 75 columns, in my usual program editing font in emacs (lucidasanstypewriter-bold-18). 96x114 in a slightly smaller font (lucidasanstypewriter-12). 133x165 in a small font (7x13). This is very nice, but... I find that I want wider displays, without giving up the vertical span. 60 lines is nice, but 75 columns is narrower than some of my programs.

Over the years: I started coding on PDP-11, then CDC and IBM punched cards and line printers with wide paper, 132, 112, 80 and/or 72 columns, depending. Some coding on machines with 40 column displays; fortunately, that did not last long. Many years where the BKM was to limit code to 80 columns, or 72. In the last few years I admit that I have relaxed my standards, and started writing wider code.
It can be surprisingly more readable not to have to split a line of code up over multiple lines. I think because some ^%%^&%&%$^%$ APIs require lots of parameters. Splitting an important, complicated, IF statement over multiple lines is good, because it may be important code. But splitting a function call of minor importance up over multiple lines, not because it is important, but because it has a lot of parameters because of an ill designed API, is bad. The size, the visibility, the vertical span of a section of code should correspond to its importance, not to its verbosity.
(I have long advocated colorization and formatting, both in program editors and in the programming languages themselves. I advocate next generation programming languages that are XML, not just ascii formatted into XHTML. I.e. where the XML indicates semantic constructs. I like folding editors. I can imagine changing font size according to the importance of code. Although how to deduce the importance of code?)

An unanticipated downside, evident as I watch a video webcast: I have had to place my laptop PC, with the speaker, off to the side. It is disconcerting to have the sound come from a different place than the video - and with large monitors, the distance increases. In the past, I have dissed monitors with speakers built in - now I understand. My current monitors do not have speakers built-in, but clip-ons can be purchased.

2 vertical displays next to each other is much more regular than my old configuration. Nevertheless, I needed a bit more hacking with AutoHotKey, to reduce the amount of wrist pain inducing mousing. I tried tweaking mouse ballistics, but eventually found an old trackball. Being able to roll the ball from edge to edge is wonderful. I've learned, over the years, precisely how hard to roll the ball so that it runs halfway across, etc. However, it is worth noting: large displays => pointer devices need tuning and improvements.

GUI, desktop and window management is a bit lacking.
I wasn't able to persuade Windows to treat the pair of monitors as a single large display. I have seen this option in the past, but can't find it again.
Nvidia's NVIEW desktop manager has been helpful. Allowing the task bar to be spread across the two different displays is a help. Ensuring that dialog boxes come up on the current display, as opposed to a default area far, far, away. Buttons to move a window to a different display.
Actually, not having Windows treat it as a single display has been helpful. If Windows treated it as a single display, maximize would expand to cover both displays. As it is, I find that maximizing to cover one of the displays, half the total area, is much more useful. Nvidia NVIEW provides the "Maximize to entire desktop (both displays)". I need to write a few AutoHotKey scripts to maximize to 1/4 of the desktop (the upper half of one display), etc.
I am noticing this with only two displays side-by-side in portrait mode. Some of my friends have 3 or 4 such displays (they work at companies/groups that invest in programmer productivity). Googling, one finds that many multi-monitor and very large monitor folk are reviving what amounts to the old tiled window manager systems.
It is a little bit odd that I only noticed this yesterday and today, when I switched to 2 side by side portrait mode displays. I have been using multiple displays for years, but mainly in landscape mode, one above the other, or side by side. I wonder why I have only noticed these issues now? Perhaps it is that my new display configuration lends itself to large vertical windows. I have read that the human visual system is much more sensitive to horizontal information than vertical.
Overall: large displays need reworking of the user interface.
I suppose I knew this already, given my long term interest in blackboard scale displays. It's just different, when it is staring me in the face.

Monday, July 27, 2009

Error Handling

I've been thinking a lot about the examples Andrei Alexandrescu used in his talks about the D programming language.

Briefly, Andrei pointed out how the "Hello World", simplest possible program examples, in nearly all popular programming language books, are not correct, or at least are not correct if an error occurs. They do not check for the return value of functions like printf, or they do not arrange for the OS to see an appropriate error code if the program fails.

Andrei argues that a programming language should make it as easy as possible to write correct code, complete with error handling.

The D programming language has constructs such as scope(exit/success/failure) and the now ubiquitous try/catch to make resource management in the presence of exceptions easier. But overall, the D strategy is to have errors be reported by exceptions. A lazy programmer, writing the simplest possible program, may assume success; the exceptions will get thrown by library routines on an error, and the default exception handler will propagate the error correctly into the OS's error exit codes, etc.

Comprehensive error detection should be accomplished with the simplest possible code, without having to clutter the code with error handling like if (syscall_return_code != 0) .... Comprehensive error handling still requires a higher level of expertise, but there D's new features may help.


I think this is all good.

However, I think that scattering try/catch all over the place is quite ugly, leading to cluttered code. Yes, RAII and scope(exit) will help. But occasionally they are not the right thing.

I've written libraries that use exception throwing as their error reporting strategy. (I often throw string error messages, and stack up the error messages from most abstract to most detailed.)

I've written test suites for such libraries with exception based error reporting. They often look like long lists of test cases like the following:

expected_exception_caught = 0;
try {
    // ... call code expected to throw ...
} catch (...) {
    assert("expected this exception");
    expected_exception_caught = 1;
}
if( ! expected_exception_caught ) {
    assert( ! "did not get expected exception" );
}
I will often use macros to make this more compact. Or test suite lists, with an exception expected flag.

I have had reason to compare such code to code using traditional C style exit codes. When a function must return an exit code, its API becomes clumsy, because it must arrange for the real return value to be returned some other way, typically via a pointer to a return area.

However, the code that exercises the error return code libraries often looks cleaner than the try/catch code.

I'd like the best of both worlds. Conceptually, return a tuple (error_code, return_value). (For that matter, multiple return values are nice.)

However, allow the programmer to omit the capture of the error_code return value. If not captured, throw an exception.

Also, possibly, to prevent the programmer from just capturing an error_code return value, but doing nothing with it, require a captured error_code to be "consumed" - explicitly marked "I have handled this error."

Possibly something like

(Error_Code error_code, char* string) = test_function_1(a,b,c);
assert( error_code && "error was expected" );

Manager vs. Maker's Schedule

I found this on slashdot:




Managers schedule their days in 1 hour chunks.

Makers schedule in chunks of half a day, at least.

A 1 hour meeting in the middle of the morning can blow away an entire half-day, for a Maker.

My team members - the interns and NCGs who have worked with and for me - will vouch that this applies to me. It applies especially to pair programming.

My coping strategy: I often block out time 10-11 and 2-4pm. And then only accept 1 meeting in each afternoon or morning. It still results in fragmented time, but it is better than nothing.

Thursday, July 16, 2009

Andrei Alexandrescu: the Case for the D programming language

=== Andrei Alexandrescu, the Case for D ===

Attended a talk by Andrei Alexandrescu, the famous C++ author. E.g. "Modern C++ Design". Basically, the guy who wrote the book on many fancy C++ template metaprogramming techniques.

Talk arranged by Gyuszi Suto, Intel DTS. Also attending: Scott Meyers, Effective C++. I did not know he lived in the Portland area.
I used to correspond with Andrei. Had never met in person. Had never corresponded or met Scott.

Alexandrescu's talk was on his work with Walter Bright on the D programming language.

Many neat ideas about programming. In many ways, a nice replacement for C++. One might say, C++ done (almost) right.

Since you can look at the slides, and/or at the webpages, my blog will mainly be on my reactions.

---+ Introduction

I very much liked the introduction "Why all programming language book versions of Hello World are incorrect". Yes, really. Even though Andrei made mild fun of me while doing so.

Brief: K&R hello world does not return correct error codes, whether success or failure. C++, Perl, Python, Java, similarly broken.

D tries to make the easiest code to write also correct. Handle errors. E.g. throwing exceptions, implicitly caught outside main, usual path for library error reporting. E.g. multithread correct by default.

---+ Initialization

Everything is initialized (unless the programmer says not to, by saying "int x = void").

Unclear what default initialization is. Just forcing initialization to zero is better than nothing, but not great.

It took me a surprisingly long time to find this: http://www.digitalmars.com/d/2.0/type.html.

The default initializer is nearly always 0. Except for floating point, where it is a signalling NaN. And char, where it is 0xFF, 0xFFFF, or 0x0000FFFF, depending.

Enum default initializer is the value of the first member of the enum. (Glew: usually a good idea, but may not be if you want the enum to interface to a hardware register. I wonder if this is the default default initializer, but if you can override the default default initializer with an explicit default initializer for enum. Tongue twister.)

I found this post by Walter Bright, D's original designer, explaining: http://dobbscodetalk.com/index.php?option=com_myblog&show=Signaling-NaNs-Rise-Again.html&Itemid=29

D has a feature where all variables are initialized to a default value if no explicit initializer is given. Floating point variables are default initialized to a quiet NaN. Don suggested that instead, the default initializer should be a signaling NaN. Not only that, he submitted patches to the compiler source to do it. Even more significantly, others chimed in wanting it.

Signaling NaNs now play in D the role they were originally created for -- to make the use of uninitialized floating-point variables trivial to track down. They eradicate the nightmares you get in floating-point code when code fails intermittently. The instant recognition of how useful this can be indicates a high level of awareness of numerics in the D community.

OK, so FP are initialized to signalling NaNs. This is good. Although maybe not so good for computer architectures that have deprecated NaN support.

Initializing FP to signalling NaN is safest.

Initializing integer types to 0 is better than nothing. But even this can give errors. Initializing to something that signals would be nice. But there is no standard for signalling for integer types. I have created template classes that have a valid bit, but it is unreasonable to make that the default. I guess that 0 is as good as can be done in the present state of the world.

I asked the compiler guy for some project to give me a compiler switch to change all integer types like int to MyTemplate. E.g. Valid. Or to change all pointer types char* to CheckedPointer. But I think that this would disagree with D's "no name hijacking, ever" principle.

You can initialize struct/class members in place:

class C {
    int a = 6;
}


---+ Shell Integration

I like.... I miss Pop-2, Pop-4.

No eval. I guess I am not surprised... although eval is powerful.

---+ System Level Features.

Real pointers, real addresses. Can forge pointers.... "forging" means something different to capability based security people.

Modular approach to safety: code compiled with module(safe) cannot do things like forging pointers; code compiled with module(system) can. Mark Charney and I bounced words at each other like embeddable, subsettable, defeaturable. Mark recently changed a big tool from C++ to C to avoid the overhead of the C++ run-time. I chatted with BSD kernel guys on the same topic. Real systems programming languages allow stuff to be stripped out, like exception handling, like GC.

"Standard library is trusted": this almost set off red flags and alarms for me as a security guy, since trusting printf-like code was one of the classic Burroughs era security bugs. But I realized that Alexandrescu's "trusted", from the D language perspective, and my perspective from the point of view of real hardware security, are very different. Basically, even though printf is "trusted" as a library, I am still not going to implement printf as a system call, with the character formatting in the kernel.

---+ Large-Scale Features

True module system. Semantics independent of inclusion order. No name hijacking, ever.
I asked how Michael Feathers' tricks, in Working with Legacy code, to get at seams of legacy, e.g. by #defines, or linker scripts, would work. I understand the motivation for no hijacking. However, I suspect that a more structured way of hijacking might be desirable. Not necessarily order dependent. But more like "In module Foo, please intercept all calls to module Bar, and route them through module BarWithDebugging".
Andrei also kept on about how each module is a file. Since I'm a guy who spent days figuring out how I could cram multiple Perl packages into the same file, I am not so sure I like this module=file enforcement.
I do like how D allows the header to be created automatically from a module. And/or allows separate headers and modules, and enforces compatibility. Avoids writing code twice, but permits it if you really want that. (Again: I often create "header only libraries", like Boost. And I created the interface generator for iHDL. I hate writing code twice, even declaration/definition.)

Contracts. Yay! I thought at first it was only a subset of Eiffel contracts, but it appears complete.
Scott Meyers asked Andrei a gotcha question. D has "protected". But apparently the class invariants were not imposed before/after protected methods. Sounds like it will get fixed.

Code annotated with keyword unittest can be embedded, and run before main.
Nice. But I started getting afraid of creeping featurism.
D defines lots of keywords. At the very least, I want to give up on keywords - require keywords to be distinguished by XML. I'm into post-Ascii languages.

const and immutable.
const = "this function in this thread will not modify this const parameter reference. But other threads may be modifying it."
immutable = "guaranteed that no function in no thread anywhere will modify"
Afterwards talked with DT guys and Scott Meyers about how multithreading means that const optimizations are unsafe. This is not news to me.
Quibbles: immutable seems less strong to me than const. Maybe "const const"? Keyword proliferation.
void print(const char[] msg) { ... } - const bridges mutable and immutable, same function handles mutable and immutable arrays. (Similar transitivity for shared)

pure functions - good
Andrei had a contrived example of how you could declare a function pure even if it had local variables. Yawn. I think this must matter to functional purists. But to practical folk, this is obvious.

---+ Programming Styles

Imperative (OOP), functional, generics. Not purist. Not "everything is a ..."

Hash tables are built in or standard.
TBD: optimize ...

---++ struct vs class

structs have value semantics (just like int)
No struct polymorphism.
All other class-like amenities - constructors, op overloading.

classes have ref semantics. => inheritance and dynamic polymorphism

Q: I'm assuming that you can pass structs by ref or pointer... And can similarly pass class objects by value. But the latter may not be true.

---++ Object Oriented

Less runtime reflection than C++ or Java.

Q: is there any compile time reflection or introspection? I.e. can I automatically create a struct printer?

Multiple subtypes, but not inheritance.

---++ Ranges, not Iterators

I understand motivation for
foreach (i; 2 .. n) result *= i;

although it is unfortunate, since many parallel programming types like me would like to use "foreach" to indicate visiting in no particular order.

---++ Generic Programming

I find the syntax

auto min(L, R)(L lhs, R rhs) {
    return rhs < lhs ? rhs : lhs;
}

I would like to understand the rationale.

Ahhh.... the (L,R) are the "template" type parameters. Now I get it.

Andrei dwelt on the rationale for not doing
auto min(L lhs, R rhs)

Syntactic ambiguity.

static if

If a regular function is evaluated in a compile time context, the compiler attempts immediate interpretation.
I meant to ask: compiler people have long known how to do this, but have had a problem: cross compilation. E.g. evaluating sin(1.0) may give different results on the machine the compiler is running on than on the target. Heck, even integers, since #bits may vary. I meant to ask if D solved this, a la Java "compile once, run everywhere".

BTW, Scott Meyers asked if D detected integer overflow. No. I wonder if it would be legal to create a compiler that detected overflow. In C/C++, too much code depends on overflow, e.g. shifting bits off top.

---++ Functional Programming

Andrei wants to "erase off the face of the Earth" fib(n)=fib(n-1)+fib(n-2)

Went into a 3 slide jibe about expressing it as a loop rather than functional.

Since I have seen how Donald Knuth has an optimization, that *should* be in modern compilers, to do this, I was unimpressed.

This, and Andrei's rant about how "pure" functions can have temporary variables, leads me to suspect that he has engaged in debates with functional programming, Haskell, bigots.

Hmmm... seems most people haven't seen Knuth's recursive to iterative transformation.

Donald Knuth. Structured Programming with go to Statements. Computing Surveys 6 (4): 261–301. doi:10.1145/356635.356640. http://pplab.snu.ac.kr/courses/adv_pl05/papers/p261-knuth.pdf

Heck, that wasn't too hard to find. It was in wikipedia!

Revisiting Knuth's paper, I am reminded that he did not actually do Fibonacci. While he eliminated the tail recursion completely, he eliminated the interior recursion, but left in a stack. So you would have to go beyond Knuth's paper to convert Fibonacci to iterative, O(n) time, O(1) space. Methinks it should be doable, although I wonder if it can be done without symbolic execution.

---++ Scope statement

void transmogrify() {
    string tempfile = "delete.me";
    scope(exit) {
        if (exists(tempfile)) remove(tempfile);
    }
    auto f = File(tempfile, "rw");
    // ... use file f ...
}

Like C++ RAII

void transmogrify() {
    string tempfile = "delete.me";
    class FileCleanup {
        string tempfile;
    public:
        FileCleanup(string tempfile) { this->tempfile = tempfile; }
        ~FileCleanup() { if (exists(tempfile)) remove(tempfile); }
    } file_cleanup(tempfile);
    auto f = File(tempfile, "rw");
    // ... use file f ...
}

Except that you can say "If I am calling this destructor for any reason/because of an error/normal success".

OK, that wins. But.... are D's scopes first class? I have a library of C++ classes that are designed to be used as RAII objects.

By the way.... if a function or class defined locally refers to a variable in the surrounding context, and the lifetime survives the context, it can be converted implicitly to a closure. While this is great, it also sounds like a big source of bugs, performance problems, and memory leaks.

---++ Concurrency

D has a PGAS-like memory model. (PGAS = partitioned global address space, or, as I like to say, private/global address space. See http://en.wikipedia.org/wiki/Partitioned_global_address_space)

By default all variables are private. Duplicated in all threads. Thread local storage. Even globals.
Hmm, interesting: this pretty much requires a base register for TLS if you want to access (private) globals. I wonder where we will get that base register?

But you can explicitly add a shared keyword.

shared is transitive. Like const leads to C++ const-ipation, shared will have similar issues. D may ameliorate them via its more convenient generic support.
I think that any transitive feature suffers const-ipation. Transitive shared or volatile const-ipation. Sounds disgusting.

Q: how do you say "void f(int a, int b)", where all combinations of shared and private int a and b are allowed?

Sure, *some* folks will just say "cast to shared". Shared thread-safe code should be able to manipulate private variables, right?

Except... in PGAS supercomputer land private and shared may use different instruction sets. Truly. In fact, one of my goals is to allow the same instructions to be used for shared and private.

Andrei did not talk about the memory ordering model. Heck, if you define a memory ordering model, on a weakly ordered machine even ordinary instructions used to access shared data must have fences added all over. Perhaps this is why we should encourage transitive shared const-ipation.

I liked Andrei's last slide:

Process-level concurrency with OS-grade isolation: safe and robust, but heavyweight
Andrei's aside: UNIX: it worked for years.

Thread-level concurrency with shared memory: fast, but fraught with peril
This is almost as pithy as Ashleisha Joshi's "Shared memory is a security hole".

Typesystem-level isolation: safe, robust, and lightweight
Although it may lead to Transitive shared or volatile const-ipation

---+ Conclusion

D is definitely interesting. Whether or not it will take off, I do not know. But it certainly has some nice features. If it were ubiquitous, I would use it.

My biggest issues with D, apart from the fact that it is not ubiquitous (which is chicken and egg), are

a) Keyword proliferation. I've said before: I am ready to transition into languages with XML tags around keywords.

b) Embeddability/subsettability/defeaturability. I would like to be able to, e.g. turn off the GC, or turn off the exception handling. I like the idea of a high level systems programming language, but the OS kernel programmers I know don't even use C++. They use C. It is not clear that D solves all the problems that prevent so many OSes from using C++ in the kernel.

Wednesday, July 15, 2009

Calendars and Phones

A friend just got an iPhone. Primary uses, apart from as a cell phone and music player: PDA stuff like calendaring.

Now she keeps her calendar on her iPhone. Both personal/family, and work related.

Her employer uses Outlook Exchange Calendaring. But they do not allow calendars to be synchronized with personal calendars outside of work. So my friend simply does not use the company calendar. She says that co-workers complain when they arrange meetings in slots that appear free on her company calendar - which is empty - but which she cannot make because of conflicts. But she says it is so much more important to her to be able to manage her personal and family life in the same place as her work meetings, that she is willing to put up with the loss of the shared work calendar.

Perhaps I should mention that she works, not exactly part-time, but very flexible hours. She is constantly managing family commitments, getting kids to events, as well as weekend and evening work assignments. Her day is not neatly partitioned into work and non-work.

  • People want both personal and work calendars on their devices like phones.
  • Corporate security rules get in the way.

Restart Time Matters

This morning, after a 6-7am meeting, my work laptop PC hung up. Restarting it highlights another performance issue.

The original hang was not bad, but annoying. The Nvidia nView Desktop Manager, which I use to change my screen settings - e.g. to go into and out of reduced resolution, so that the folks in Israel could see my slides - left a "turd" on my screen. A "just wait" box was thrown up, and stayed around for 15 minutes, after the Nvidia tool was closed down, after I left to do something else for a while. Past experience has shown that occasionally just waiting a few minutes leads to such turds being cleaned up - but not this time.

(I suspect that I could have killed rundll32.exe from the process (not task) manager, but I am always nervous about killing processes when I do not know exactly what they do. Especially something like rundll32.exe, which gets used by many facilities.)

I could have left this turd around and kept going. But it is annoying to have a small rectangular area on your screen that won't redraw. So I decided to restart my PC, in the hopes that (a) that would clean up the turd, and (b) I could go and have breakfast while waiting for the restart.

So, I hit restart. Waited a bit. Walked away.

After my beans and tomatoes on toast (there, that proves I'm a Brit, even if my accent is more American than Canadian after 24 years in the US), I went back to check. I expected that I might have to type in my hard disk password (for PGP Full Disk Encryption).

But, surprise! Actually, no surprise, just disappointment. Windows' restart had hung, waiting for me to say "OK" to killing some application. There was a flurry of such kill dialog boxes, some with timeouts, but enough without timeouts that I had to sit and wait. The rundll32.exe that I suspect was used by Nvidia, that I suspect was causing the problem, was one of them.

OK, answer all of these dialog boxes. Now, surely, I can walk away...

Nope. Come back another 15 minutes later, to see a blank screen. Not dead - when I moved the mouse, and typed ctl-alt-del, suddenly things started proceeding to shutdown again. But, it certainly was hung waiting for something to be rejiggered or unblocked.

Finally, it reboots. Watch the reboot. Type in the PGP full disk encryption password.

What a loss! What a lost opportunity! If I could have overlapped the reboot with eating breakfast, I would have had half an hour more for work.

MORAL: even the time to restart matters. Automation, to allow a completely automatic reboot without having to type in passwords, or anticipation - having the reboot subsystem ask, in advance, "Do you want to end all processes that aren't responsive?", rather than waiting to ask me about them one by one. Automation and anticipation, even of something like the reboot process, would be a good thing.

At work we often say things like "Surely people don't care about the speed of rebooting after a Machine Check". Wrong! Maybe we don't care about it as *much* as we care about normal operation. But the performance (and automation) of anything we do a lot matters. And, on Wintel PCs, rebooting happens quite a lot.


There may be another moral here. I like the idea of microrebooting. Perhaps I could microreboot the display manager, and/or otherwise tell the display manager that the non-responsive process's window should not be redrawn.

Sunday, July 12, 2009

Moving to the Net: Encrypted Execution for User Code on a Hosting Site

I'm gravitating towards establishing at least 2 net.presences - 1 on a shared hosting site, the other on something like Amazon EC2 where I get to administer the (virtual) machine. Working on the assumption that it is unlikely that both will go down simultaneously. (The sysadmins of the shared hosting site are more likely than I am to respond quickly to some spreading net.malware that requires immediate patching.)

Plus, of course, the usual personally owned computers/servers.

The prospect of moving my "home" off computers I own to computers that hosting service provides is a bit worrisome. E.g. would I/should I trust storing my TurboTax on a hosting site? Running Gnu Cash?

Put that way, it is more attractive to use net.storage like Amazon S3 for encrypted storage, but to always try to do the secure manipulation on computers I own.

However, just think of how already all my financial information is spread across various provider websites.

Anyway... Thinking about virtual hosting with Amazon EC2... How could you have enough trust to do sensitive work on such a system? Or, rather, could we build systems where the hosting sysadmins would not be able to look at the data their hosting clients are working with?

It's a lot like DRM, Digital Rights Management, the sort of thing that motivated LT. However, instead of trying to prevent the user and owner of a PC from accessing unencrypted data on his own PC, what I would like to do in the hosting space is prevent the provider of the hosting service from accessing unencrypted data that belongs to the person renting the hosting service.

Let's imagine that this could be done. In some ways it might prevent one of the IMHO big advantages of centralized hosting, which is that you could have a sysadmin scanning for problems. The sysadmin might still be able to look for usage patterns that indicate a malware breakin, but the sysadmin would certainly have an easier time of it if he could look at the data.

It would also get in the way of another of the IMHO big advantages of centralized hosting: data de-duplication. (That seems to be the new term for what I have mainly been calling content based filesystems, inspired by rsync, Monotone, git, etc.)

E.g. I might reasonably want to store a copy of every piece of software I have ever bought on my net.storage-system. E.g. I might want to store every blamed copy of Windows I have ever had a license to. So that I can restore to the PCs that they run (ran) on, whenever... Subject to licensing, of course. Now say that the host is a big site, like Google or Yahoo or Amazon. Millions of Windows users might be doing the same thing. The opportunities for data de-duplication are large. But if each user's data is separately encrypted, de-duplication would not work.
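To make the de-duplication argument concrete, here is a toy sketch of a content-addressed block store (in the spirit of rsync/git), with per-user encryption simulated by a byte-wise XOR. All names are illustrative, not any real service's API.

```python
# Toy content-based de-duplication store: blocks are keyed by their hash,
# so identical plaintext is stored once. Per-user encryption (simulated
# here by XOR with a per-user key byte) makes identical plaintext hash
# differently, defeating the sharing. Illustrative names only.
import hashlib

store = {}                      # hash -> block: the host's dedup store

def put(block: bytes) -> str:
    h = hashlib.sha256(block).hexdigest()
    store[h] = block            # overwrite is harmless: same hash, same block
    return h

win95 = b"WINDOWS95.ISO contents ..." * 1000

put(win95)                      # user A uploads a copy
put(win95)                      # user B uploads the same bits
assert len(store) == 1          # de-duplicated: stored only once

def user_encrypt(key: int, data: bytes) -> bytes:
    """Stand-in for per-user encryption -- a toy XOR, not a real cipher."""
    return bytes(b ^ key for b in data)

put(user_encrypt(0x5A, win95))  # user A, encrypted with A's key
put(user_encrypt(0xC3, win95))  # user B, same plaintext, B's key
assert len(store) == 3          # the same content is now stored twice more
```

The last assertion is the whole point: separately encrypted copies of identical content look like distinct content to the host, so the de-duplication opportunity vanishes.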

These arguments lead me to suspect that completely encrypting all information, in a way that prevents the host from looking at it, is undesirable. At the very least, optionally encrypting on a file-by-file basis would be desirable. E.g. I might encrypt my TurboTax files, but not my saved copy of Office 95.

Okay, okay... now, can we think of ways of preventing the sysadmins managing a hosting service from having full access to at least some user data?

Start off with encrypting data on disk. Encrypt it in main memory. Encrypt it into the cache hierarchy.

Decrypt only when moving between cache and registers. (Or possibly in the innermost levels of the cache hierarchy (L0, L1).)

How do you prevent the OS from looking at the unencrypted register state? On an interrupt, re-encrypt the registers as they are saved to memory.

Note that this means that you probably won't do "efficient" interrupts that leave as many registers as possible live. You have to save them... or, at least, you might encrypt them in-situ. Or have hardware "lock" them in-situ, and then do a lazy save to the encrypted area. But, you can't let the host look at the unencrypted registers.

Debuggers would have to be disabled while manipulating encrypted content. Or... you could allow debugging, but "lock" registers containing encrypted content. The debugger would either not be allowed to look at such registers, or might be allowed to look at "obfuscated" contents. I say "obfuscated" because, while it might be challenging to encrypt a 64B cache line, it will be almost impossible to encrypt 64bit (32bit, or 8bit) registers. Despite these attempts, the debugger can probably infer register contents (for all bits do: jmp if set). So, while I can imagine ways of making it harder for a debugger to observe sensitive data, at the very least debugging would be a very high bandwidth covert channel. Just like reading the time or reading a performance counter, debugging should be a facility that requires special permission.
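The "for all bits do: jmp if set" leak can be made concrete with a toy sketch: even if the debugger never reads the protected value directly, the taken/not-taken pattern of the branches it single-steps through reconstructs it. The values and names here are illustrative.

```python
# Toy sketch of leaking a protected value through control flow alone:
# a single-stepping debugger cannot read `secret` directly, but it can
# observe which branch each "jmp if set" takes, one bit per branch.
secret = 0b1011_0010          # stands in for a protected register value

observed_branches = []        # what the single-stepping debugger sees

for bit in range(8):
    if (secret >> bit) & 1:   # "jmp if set" -- taken vs. not-taken is visible
        observed_branches.append(1)
    else:
        observed_branches.append(0)

# The debugger reassembles the bits without ever touching the register.
reconstructed = sum(b << i for i, b in enumerate(observed_branches))
assert reconstructed == secret    # full value leaked through control flow
```

Eight observed branches, eight bits: this is why debugging is a very high bandwidth covert channel, and why it belongs behind special permission.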

Besides, why would a hosting sysadmin want to debug a user's application? The only legitimate reasons I can imagine involve trying to figure out if a user has been compromised by malware. Some intrusion detection systems singlestep execute in a debugger.

So, I am formulating a vision where the OS may run user tasks, but the user tasks may establish encryption-based security that prevents the OS from looking at the user task data. There is some sort of key inside the processor that the OS cannot look at. This key is used to encrypt and decrypt data as it flows between processor registers and cache/memory. Registers are saved/restored encrypted. Hardware probably defines a save area for all register state. Registers may be locked against the OS, and/or for lazy encrypted save/restore.

System calls? The classic UNIX system calls - read, write, etc. - do not require the OS to interpret the memory contents. They can handle encrypted as well as unencrypted data.

Bootstrapping? How do you bootstrap this? How do you start a user process, and give it a private key, in a manner that the OS cannot see? Basically, you will have to have a small trusted kernel, trusted bootstrap. It doesn't have to be done once and only once: I believe there are already many patents on late secure bootstrap (with me as one of the inventors). So, the untrusted OS can be running, and can invoke a small trusted kernel. This small trusted kernel must have its own private key(s), and can look up user private keys in tables it maintains on disk, and/or obtain such keys across the network, after the appropriate secure handshake and communication. For my usage model - wanting to execute my stuff at a hosting site - I would probably prefer the "access across the network" approach. However, if I wanted to run cron jobs on the host, host-side storage of user keys might be necessary. It would be necessary for the user to trust this kernel, even though the user does not trust the OS. But this is the same issue as for DRM.

This trusted kernel could be microcode. But it could just as well be user code, with the trusted key hardwired in.

This is all very much like DRM. But it differs in emphasis: in DRM, you typically only want one or a few applications - the audio or video player - to be protected from everything else running on a system. Whereas here I have described how to run user applications on a hosting site, protected from the host OS (and any other software). In this usage model it is desirable to be able to protect almost any application from the OS. The DRM audio and video subsystem can be a special case; but this hosting facility wants to truly be part of the OS. Or, rather - it wants to NOT be part of the OS, but wants to be a facility available to most processes and users under the OS.

This works for generic user processes on a generic OS, e.g. a UNIX user account. It is not just limited to virtual machines.

Can the OS intercede, e.g. by munging with the interrupt return IP or virtual memory? While we probably should not prevent the OS from such manipulations, we can prevent the OS from gaining access to sensitive data. The state and/or code being returned to can be authenticated. Untrusted code can be prevented from ever using the trusted keys.

It would be straightforward to similarly protect the code from the OS. But, while DRM may want that, I think that is much less necessary for this hosting usage model. Generic code, manipulating sensitive data. E.g. I want "cat" and "grep" to be securable. I envisage something like

sensitive-data gnucash dump account | sensitive-data grep my-hotel-bill > file-whose-sensitive-stuff-I-can-only-view-on-my-client

or

sensitive-data gnucash dump account | sensitive-data grep my-hotel-bill | sensitive-data -stdout unencrypted more

Friday, July 10, 2009

Yahoo Geocities closing - Don't Trust Free Websites

Yahoo Geocities is closing down.

For several years this was the home of my resume, and other stuff: http://www.geocities.com/andrew_f_glew.

Yahoo gives you the option of paying $5/month for webhosting, or saving your files. Yahoo requires you to save the files one at a time, clicking Save As on a menu. There is no ability to download all of your files as a tarball or other archive. There is no WebDAV access, or other filesystem-like access. I.e. there is no easy path to automatically transferring a large website.

Fortunately, I did not have many files on Yahoo.

Even more fortunately, I abandoned Yahoo Geocities a few months ago, and moved all of my stuff to Google Docs. http://docs.google.com/Doc?id=dcxddbtr_6dvpxg2cj&hl=en

However, there is no guarantee that Google Docs will remain free, or even available, forever.

Google Docs doesn't appear to have an easy way of downloading all files. Who was it that provided such a facility? One of the wiki sites that closed down?

Coincidentally, I have been shopping for a hopefully more permanent home for me and my files on the web. I've been looking at Amazon S3 and EC2. Yes, I am willing to pay.

Amazon EC2 is good for virtual hosting. However, while I like being root, I would also like an ordinary user account on some UNIX system. I would like somebody else to be responsible for keeping up to date on security patches. Ideally I would like to run a wiki on such an account, but I want the wiki to run, not even as me, but as a Mini-Me, with even less privilege.

I guess I want it all: the convenience of having someone else sysadmin, but with the ability to run a small number of personal web services in deprivileged accounts.

For that matter, I'd like the hosting system to run a non-standard OS like FreeBSD, and a non-standard instruction set. ARM? Power? Is it security through obscurity?

As for my free websites:

* I learned that I still had a Yahoo mail account from long ago. Once again, no bulk download, unless I upgrade to pay, at which point I get POP.

* Google Docs - no automated access

* Google Mail - I have IMAP and POP access. I think I better start backing it up better.

Backing it up to S3? Wherever.

Just like there is a period of recent history that was lost, because information stopped being recorded on paper and started being recorded on quickly obsolete digital formats such as tape, floppies, etc., now we are traversing the stage where history is lost because of service providers closing down. One can only hope that we will come out the other end with storage as a commodity, with standard procedures for migration.

Sunday, July 05, 2009

Gilding the Lily

By the way, I recognize that in the preceding posts I risk the sort of gilding the lily that so many software teams do when they define many, many, different comment keywords.

    E.g. http://www.cs.clemson.edu/~mark/330/colwell/p6-coding-standards.html.
    This is relatively small - only 4 primary keywords, with 3 secondary.
    Other example standards are much larger.

    - TBD: This "flag" is used to indicate items that require later definition. It stands for To Be Defined (or Determined). The ensuing comment should provide more particulars.
    - NYI: This "flag" is used to indicate items that have been defined and are now awaiting implementation. It stands for Not Yet Implemented. The ensuing comment should provide more particulars.
    - MACHDEP: This "flag" is used to indicate the existence of an explicit machine dependency in the code. Again, the ensuing comment should provide more particulars.
    - BUG: This "flag" is used to indicate the existence of a bug of some form. It should be followed immediately in the comment (on the same line) with one of the keywords incomplete, untested, or wrong, to indicate its type along with a more descriptive comment if appropriate. If none of the keywords is applicable, some other descriptive word should be used along with a more extensive comment.

The most commonly used such keyword is probably
  • TBD or TODO
and other common keywords include
  • NYI: not yet implemented
  • BUG:
  • NOTE:
But frequency of use rapidly goes down. E.g. although I approve of the concept of MACHDEP, I hardly ever used it during the P6 project, and I had totally forgotten about it.

Comments such as these work best when automated.

E.g., although I hate doxygen, doxygen comments get automatically extracted, and are hence more likely to be seen and corrected.

E.g. ditto Perldoc. Again, I find perldoc makes code harder to read.

Preparing this blog, I found
"The Fine Art of Commenting",
Bernhard Spuida, 2002.

Spuida mentions several such systems, including Microsoft's C# .NET system,
where XML tags are embedded in comments.
As indicated by Spuida's title "Senior Word Wrangler",
the motivation seems to be mainly about preparing documentation.

It was fun to re-find the Indian Hill Coding Standards

Every few years I seem to go through a spate of reading of coding standards.
I think every programmer should be familiar with several, many, different coding standards,
so that he or she can pick the parts that work well,
for projects that need those parts.

Final words:
Brian Marick's lynx.
Hypertext embedded within the code itself.
My own hope of wiki files in the source trees,
and/or linked to from the code itself.
Programming languages using extensible notations, such as XML,
allowing arbitrary structured annotations.

Test passed/failed

More wrt test monitoring.

I concluded the last post with (slightly extended):
TEST RUN: test1
TEST CHECK OKAY: test2 check1
TEST PASSED: test2.1
TEST END: test2
Implying 2 top-level tests, test1 and test2. Test1 is a "monad", reported by TEST RUN outside of START/END. Test2 is bracketed by START/END, and contains subtest 2.1.

When I started testing seriously, I thought that all tests could be classified passed/failed. That is always a worthwhile goal. If it could be accomplished automatically, it might suggest what we might express in pseudo-XML as:

<test name="test1" result="passed"/>
<test name="test2">
<test-check result="ok" test_name="test2" check_name="check1"/>
</test name="test2" result="passed">

My pseudo-XML allows attributes on the close. Without this, one might just expect a TEST PASSED message immediately before the close.

However, over the years I have learned that things are not always so clear cut. While it is always best to write completely automated tests that clearly pass or fail ...

Sometimes you write tests and just run them, but do not automatically determine pass or fail.

Sometimes manual inspection of the output is required.

Sometimes you just want to say that you have run the test, but you have not yet automated the checking of the results... and sometimes, given real-world schedule pressure, you never get around to automating the checking. In such cases, IMHO it is better to say

TEST TBD: have not yet automated results checking yet

than it would be to just omit the test.

Oftentimes, the fact that a test has compiled and run tells you something. Or, rather: if the test fails to compile or run it tells you that you definitely have a problem.

Sometimes you can automate part of a test, but need manual inspection for other parts. In this case, I think reporting "TEST PASSED" is dangerously misleading:


or, better

TEST TBD: foo: need manual inspection of rest of test output

I think that "TEST PASSED" tends to imply that the entire test has passed. If you say "TEST PASSED" without a label test name, it tends to imply that the enclosing test has passed.

Better to say

TEST PASSED: sub-test bar of test foo
TEST TBD: foo: need manual inspection of rest of test output

I have recently started using other phrases, such as "TEST CHECK"

TEST CHECK OKAY: foo check1
TEST PASSED: sub-test bar of test foo
TEST TBD: foo: need manual inspection of rest of test output

Q: what is the difference between a TEST PASSED: subtest and a TEST CHECK OKAY (or TEST CHECK PASSED)? Not much: mainly, the name tends to imply something about importance. Saying that a test or subtest passed seems to imply that something freestanding has passed. A check within a test seems naturally less important.

This is along the lines of assertions. Some XUnit tests count all assertions passed. While this can be useful - particularly if some edit accidentally removes thousands of assertions - I myself have found that the number of assertions gives a false measure of test effort.

It may be that I am conflating "test" with "test scenario". A "test scenario" or "test case" may be subject to thousands of assertions. Particularly if the asserts are in the infrastructure. But I really want to count test cases and scenarios.

Here's one reason why I try to distinguish #tests passed from #checks performed:
  • my test monitor performs consistency checks such as tests_passed = test_cases, tests_started = tests_ended, etc.
What I really want is things like
  • Number of tests that had positive indication of complete success - tests passed. (Or, at least, success as complete as any test can indicate.)
  • Number of tests that had positive indication of a failure or error.
  • Similarly, warnings.
  • Number of tests that had no positive indication - a monad "TEST RUN" message was seen, or perhaps a TEST START/END pair, but no positive indication.
  • Number of tests where failure can be inferred - e.g. TEST START without a corresponding test end.
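These categories can be sketched with a small classifier over test output. The log lines and the classification rules here are illustrative, not the actual Perl/Tcl script:

```python
# Toy classifier: group messages by test name, then bucket each test as
# passed, failed, inferred failure (START with no END), or no indication.
import re
from collections import defaultdict

log = """\
TEST START: test1
TEST PASSED: test1
TEST END: test1
TEST RUN: test2
TEST START: test3
TEST FAILED: test3
TEST END: test3
TEST START: test4
"""

events = defaultdict(set)
for m in re.finditer(r"TEST (START|END|RUN|PASSED|FAILED): (\S+)", log):
    events[m.group(2)].add(m.group(1))

counts = defaultdict(int)
for name, ev in events.items():
    if "PASSED" in ev:
        counts["passed"] += 1            # positive indication of success
    elif "FAILED" in ev:
        counts["failed"] += 1            # positive indication of failure
    elif "START" in ev and "END" not in ev:
        counts["inferred_failure"] += 1  # crashed before reaching TEST END?
    else:
        counts["no_indication"] += 1     # monad RUN, or bare START/END pair

assert counts == {"passed": 1, "failed": 1,
                  "inferred_failure": 1, "no_indication": 1}
```

Note that "no indication" is deliberately not counted as a pass: a bare TEST RUN or START/END pair tells you the test executed, not that it succeeded.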

Test monitoring keywords

Playing with my text-test-summarizer-monitor, a little Perl/Tcl script that looks at the output of tests, filtering things like tests started/ended, passed/failed, looking for various common indications of problems. Throws up a Tcl widget. The closest thing I have to a green bar.

Here's an annoyance: my test may look like

TEST END: test1
TEST CHECK OKAY: test2 check1
TEST END: test2

Oftentimes I like to announce "TEST STARTED" and "TEST ENDED". (I just had to extend my scripts to handle both START and STARTED.) This is useful in case the test crashes in the middle, and you never get to the test end.

However, occasionally the test infrastructure does not allow this. Occasionally I just say, once, at the end "I ran this test, and it passed". That's what I mean above by the "TEST END: test1" without the corresponding TEST START.

In XML, this would be simple:

<test name="test1"/>
<test name="test2">
<test-check result="ok" test_name="test2" check_name="check1"/>
</test name="test2">

  1. I added an attribute to the closing, </test name="test2">. Although not part of standard XML, occasionally this has helped me. I call this, therefore, pseudo-XML.
  2. Note that test is available in both the self-closing <test .../> form and the bracketed <test ...> ... </test ...> form.
Note that this is pretty verbose. Although human readable, I would not call it human friendly.
Because the pseudo-XML is not so human friendly, I often prefer to print messages such as

TEST END: test1
TEST CHECK OKAY: test2 check1
TEST END: test2

But here I run into terminology: I don't have a natural way of having a standalone message that is not confused with START/END.

First: I am reasonably happy with brackets {BEGIN,START}/{END,FINISH},
and {STARTED}/{ENDED,FINISHED}. English grammar, how inconsistent.
I want to tolerate all of these, since I have used all from time to time,
and regularly find a test suite falsely broken if I assume that I have consistently used START and not STARTED.

(I'm going to arbitrarily reject TEST COMPLETED. Too verbose. Especially the dual TEST INITIATED. At least, I'll reject it until it seems I encounter it a lot.)
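That tolerance for tense variants can be sketched as a pair of regexes; the patterns are illustrative, not the actual monitor script:

```python
# Toy patterns tolerating the English variants: BEGIN/START/STARTED open a
# test, END/ENDED/FINISH/FINISHED close one, so a suite is not falsely
# "broken" by inconsistent tense in its messages.
import re

OPEN = re.compile(r"TEST (?:BEGIN|START(?:ED)?): (\S+)")
CLOSE = re.compile(r"TEST (?:END(?:ED)?|FINISH(?:ED)?): (\S+)")

assert OPEN.match("TEST START: t1").group(1) == "t1"
assert OPEN.match("TEST STARTED: t1").group(1) == "t1"
assert OPEN.match("TEST BEGIN: t1").group(1) == "t1"
assert CLOSE.match("TEST END: t1").group(1) == "t1"
assert CLOSE.match("TEST FINISHED: t1").group(1) == "t1"
assert OPEN.match("TEST END: t1") is None   # closers never match as openers
```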

But a TEST END without a TEST START is too confusing. I need an English phrase that doesn't need a corresponding start. Let's try a few:

  • TEST PASSED: test name.
      with the corresponding
    • TEST FAILED: test name

    • However, there might be some confusion because I definitely want to use TEST PASSED/FAILED within TEST START/END. See below.

  • TEST RESULT: test name
      Similarly, there might be some confusion because I might want to use TEST RESULT within TEST START/END.

  • TEST RUN: test name
      Nice because of the possible dual TEST NOT RUN

I think that I am arriving at the conclusion that any of the above, outside a TEST START/END, make sense, and should be considered equivalent to <test .../>

I am not currently checking for the proper nesting of tests, but I could be.

I think that it would be good to have a count of top level tests, either TEST START/END brackets or TEST outside such brackets, but ignoring stuff within the brackets.
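That top-level count, ignoring anything nested inside START/END brackets, can be sketched with a depth counter. The log is illustrative:

```python
# Toy count of top-level tests: a nesting-depth counter so that TEST RUN
# lines and START/END brackets inside another test are not tallied.
log_lines = [
    "TEST RUN: test1",              # top-level monad: counted
    "TEST START: test2",            # top-level bracket: counted
    "TEST RUN: test2.1",            # nested: not counted
    "TEST START: test2.2",          # nested: not counted
    "TEST END: test2.2",
    "TEST END: test2",
]

depth, top_level = 0, 0
for line in log_lines:
    if line.startswith("TEST START:"):
        if depth == 0:
            top_level += 1
        depth += 1
    elif line.startswith("TEST END:"):
        depth -= 1
    elif line.startswith("TEST RUN:") and depth == 0:
        top_level += 1

assert top_level == 2               # test1 and test2 only
assert depth == 0                   # brackets balanced
```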

Giving me

TEST RUN: test1
TEST CHECK OKAY: test2 check1
TEST END: test2

Thursday, July 02, 2009

Hyper Fan Mode in the Morning

Oh, how I have come to dread the sound of fans on my laptop in the morning:

Walk to my office. Dock my HP 8530W laptop. Power on.

And hear the dread sound of fans going full bore. Laptop hung.

I dread this sound, since I know that it will take 10-15 minutes to get to a place where I can start work.

It is curious that both my much maligned Toshiba M400 tablet, and my employer provided HP 8530W laptop, have the same "ultra-fan" mode symptom of death. Since they have different OSes - Vista in the former, XP in the latter - it suggests that the problem is not the OS per se, but some common component. Since they have different manufacturers, it suggests that it is not manufacturer specific. Probably some common software, such as a Microsoft power management utility common to both OS versions.

Wednesday, July 01, 2009

"Moded" Keys in FVWM

I am highly encouraged to use the FVWM2 X Window Manager at work on Linux.
Not exactly forced, but other options are a hassle.

Finally got around to providing some key bindings.

In particular, this hack to enter a "mode" where a window's size is changed by repeated key presses:
(1) define a menu with hotkeys calling a function
(2) have the function perform the action (e.g. narrowing the window), and then re-invoke the menu with the hotkeys.

I haven't seen it described in the FVWM2 FAQ, although I got the idea from a FAQ suggestion wrt using hotkeys.


## Andy Glew stuff
AddToMenu MainMenu
+ "Glew" Popup Glew
AddToMenu Window_Ops
+ "Glew" Popup Glew
+ "Ident" Module FvwmIdent

Key F1 A SC Menu Glew
Mouse 1 A SC Menu Glew


AddToMenu Glew "Glew" Title
+ "&RaiseLower" RaiseLower
+ "Resi&ze" Resize
+ "&PlaceAgain" PlaceAgain
+ "" Nop
+ "Maximize (toggle)" Maximize
+ "Maximize" Maximize true
+ "&UnMaximize" Maximize false
+ "Maximize 1850px1100p" Maximize true 1850p 1100p
+ "Resize 1850p 1100p" Resize 1850p 1100p
+ "" Nop
+ "MainMenu" Menu MainMenu
+ "Window_Ops" Menu Window_Ops
+ "Glew_Window_Sizes" Menu Glew_Window_Sizes
+ "&Keyboard commands" Menu Glew_Window_Manager_Keyboard_Commands_Menu
+ " Glew_Window_Sizing_From_Keyboard" Menu Glew_Window_Sizing_From_Keyboard
+ " Glew_Window_Moving_From_Keyboard_Menu" Menu Glew_Window_Moving_From_Keyboard_Menu
+ "" Nop
+ "FvwmConsole" Module FvwmConsole -fg black -bg green3
+ "Restart Fvwm2" Restart fvwm2
+ "" Nop
+ "Ident" Module FvwmIdent

AddToMenu Glew_Window_Sizes "Glew_Window_Sizes" Title
+ "1000x660 toshiba 3505" Maximize true 1000p 660p
+ "1000x700 largest compaq evo n115" Maximize true 1000p 700p
+ "1024x768" Maximize true 1024p 768p
+ "1260x956 Largest Thinkpad T42p" Maximize true 1260p 956p
+ "1350x1000 Largest Thinkpad T43p" Maximize true 1350p 1000p
+ "1350x980" Maximize true 1350p 980p
+ "1400x1050" Maximize true 1400p 1050p
+ "1500x1100" Maximize true 1500p 1100p
+ "1550x1100 largest sony grx580" Maximize true 1550p 1100p
+ "1550x1100 largest thinkpad a21p" Maximize true 1550p 1100p
+ "1600x1200" Maximize true 1600p 1200p
+ "1800x1100" Maximize true 1800p 1100p
+ "1850x1100" Maximize true 1850p 1100p
+ "1920x1200" Maximize true 1920p 1200p
+ "2700x1000" Maximize true 2700p 1000p
+ "2900x1120 2 monitors at work" Maximize true 2900p 1120p
+ "2900x950" Maximize true 2900p 950p


AddToMenu Glew_Window_Manager_Keyboard_Commands_Menu "Glew Window Manager Keyboard Commands Menu" Title
#+ "Glew_Cursor_Moving_From_Keyboard" Menu Glew_Cursor_Moving_From_Keyboard_Menu
+ "Glew_Window_&Sizing_From_Keyboard" Menu Glew_Window_Sizing_From_Keyboard
+ "Glew_Window_&Moving_From_Keyboard_Menu" Menu Glew_Window_Moving_From_Keyboard_Menu


# Hack to enter a "mode" where a window's size is changed by repeated key presses
# (1) define a menu with hotkeys calling a function
# (2) have the function perform the action (e.g. narrowing the window),
# and then re-invoke the menu with the hotkeys.

# Using the "c" client specific units, since they work well with emacs,
# even though they are NOT exactly characters.
# TBD: make use pixels.

DestroyMenu Glew_Window_Sizing_From_Keyboard
AddToMenu Glew_Window_Sizing_From_Keyboard "Glew_Window_Sizing_From_Keyboard" Title
+ "&Wider" Function Glew_Window_Sizing_From_Keyboard_Function w+1c w+0c
+ "&Narrower" Function Glew_Window_Sizing_From_Keyboard_Function w-1c w-0c
+ "&Taller" Function Glew_Window_Sizing_From_Keyboard_Function w+0c w+1c
+ "&Shorter" Function Glew_Window_Sizing_From_Keyboard_Function w-0c w-1c

DestroyFunc Glew_Window_Sizing_From_Keyboard_Function
AddToFunc Glew_Window_Sizing_From_Keyboard_Function
+ I Resize $0 $1
+ I Menu Glew_Window_Sizing_From_Keyboard


# Similar hack to move windows from keyboard

SetEnv Stepsize_for_Glew_Window_Moving_From_Keyboard 10p

DestroyMenu Glew_Window_Moving_From_Keyboard_Menu
AddToMenu Glew_Window_Moving_From_Keyboard_Menu "Glew_Window_Moving_From_Keyboard_Menu" Title
+ "&Right" Function Glew_Window_Moving_From_Keyboard_Function w+$[Stepsize_for_Glew_Window_Moving_From_Keyboard] w+0p
+ "&Left" Function Glew_Window_Moving_From_Keyboard_Function w-$[Stepsize_for_Glew_Window_Moving_From_Keyboard] w-0p
+ "&Down" Function Glew_Window_Moving_From_Keyboard_Function w+0p w+$[Stepsize_for_Glew_Window_Moving_From_Keyboard]
+ "&Up" Function Glew_Window_Moving_From_Keyboard_Function w-0p w-$[Stepsize_for_Glew_Window_Moving_From_Keyboard]

DestroyFunc Glew_Window_Moving_From_Keyboard_Function
AddToFunc Glew_Window_Moving_From_Keyboard_Function
+ I Move $0 $1 warp
+ I Menu Glew_Window_Moving_From_Keyboard_Menu

## Vain attempt to make stepsize adjustable.
## TBD: fix
#AddToMenu Glew_Window_Moving_From_Keyboard_Menu "Glew_Window_Moving_From_Keyboard_Menu" Titles
#+ "&faster" Faster_Stepsize_for_Glew_Window_Moving_From_Keyboard
#+ "&slower" Slower_Stepsize_for_Glew_Window_Moving_From_Keyboard
#DestroyFunc Faster_Stepsize_for_Glew_Window_Moving_From_Keyboard
#AddToFunc Faster_Stepsize_for_Glew_Window_Moving_From_Keyboard
#+ I SetEnv Stepsize_for_Glew_Window_Moving_From_Keyboard 10p
#+ I Menu Glew_Window_Moving_From_Keyboard_Menu
#DestroyFunc Slower_Stepsize_for_Glew_Window_Moving_From_Keyboard
#AddToFunc Slower_Stepsize_for_Glew_Window_Moving_From_Keyboard
#+ I SetEnv Stepsize_for_Glew_Window_Moving_From_Keyboard 1p
#+ I Menu Glew_Window_Moving_From_Keyboard_Menu


#BROKEN: # Similar hack to move cursor from keyboard
#BROKEN: # DOES NOT WORK, since menus affect cursor position
#BROKEN: SetEnv Stepsize_for_Glew_Cursor_Moving_From_Keyboard_Menu 10p
#BROKEN: # TBD: arrange to change
#BROKEN: DestroyMenu Glew_Cursor_Moving_From_Keyboard_Menu
#BROKEN: AddToMenu Glew_Cursor_Moving_From_Keyboard_Menu "Glew_Cursor_Moving_From_Keyboard_Menu" Titles
#BROKEN: + "&Right" Function Glew_Cursor_Moving_From_Keyboard_Function +$[Stepsize_for_Glew_Cursor_Moving_From_Keyboard] +0p
#BROKEN: + "&Left" Function Glew_Cursor_Moving_From_Keyboard_Function -$[Stepsize_for_Glew_Cursor_Moving_From_Keyboard] -0p
#BROKEN: + "&Down" Function Glew_Cursor_Moving_From_Keyboard_Function +0p +$[Stepsize_for_Glew_Cursor_Moving_From_Keyboard]
#BROKEN: + "&Up" Function Glew_Cursor_Moving_From_Keyboard_Function -0p -$[Stepsize_for_Glew_Cursor_Moving_From_Keyboard]
#BROKEN: DestroyFunc Glew_Cursor_Moving_From_Keyboard_Function
#BROKEN: AddToFunc Glew_Cursor_Moving_From_Keyboard_Function
#BROKEN: + I CursorMove $0 $1
#BROKEN: + I Menu Glew_Cursor_Moving_From_Keyboard_Menu


# From the fvwm2 FAQ
#7.9 Moving the mouse/focus/page with the keyboard.
# Try these key bindings for mouse movement:
# # shift- to move a few pixels
# Key Left A S CursorMove -1 0
# Key Right A S CursorMove +1 +0
# Key Up A S CursorMove +0 -1
# Key Down A S CursorMove +0 +1

# Glew version of these bindings

# - I wish I had a modifier reserved for the X/VNC/FVWM2 window manager
# but since I am running Emacs meta is usually taken
# and since I am running under VNC on Windows the Windows key is taken
# (Moral: need one key per layer of Window Manager)

# so, anyway, the closest thing I have to a free modifier is meta-arrow

# shift- to move a few pixels
Key Left A M CursorMove -1 0
Key Right A M CursorMove +1 +0
Key Up A M CursorMove +0 -1
Key Down A M CursorMove +0 +1