The content of this blog is my personal opinion only. Although I am an employee - currently of Nvidia, in the past of other companies such as Imagination Technologies, MIPS, Intellectual Ventures, Intel, AMD, Motorola, and Gould - I reveal this only so that the reader may account for any possible bias I may have towards my employer's products. The statements I make here in no way represent my employer's position, nor am I authorized to speak on behalf of my employer. In fact, this posting may not even represent my personal opinion, since occasionally I play devil's advocate.

See http://docs.google.com/View?id=dcxddbtr_23cg5thdfj for photo credits.

Monday, January 30, 2012

Version controlling VCS metadata

My project .hg/hgrc is growing pretty long.

It needs to be under version control.

Unfortunately, hg does not - indeed refuses to - version metadata such as .hg/hgrc, for security reasons: .hg/hgrc may contain executable hooks...

Seems to me there should be a better way, e.g. disabling its executability while still allowing it to be versioned.
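One shape a better way could take (a sketch of my own, not anything hg supports; the file name hgrc.tracked is invented): keep an inert, versioned copy of the metadata in the working tree, and install it into .hg/hgrc only after a human has looked at it - which answers the executable-hooks security objection.

```perl
use strict;
use warnings;
use File::Copy qw(copy);
use File::Path qw(make_path);

# Copy the versioned, inert metadata file into the live (unversioned)
# location. "hgrc.tracked" is a name invented for this sketch.
sub install_tracked_hgrc {
    my ($tracked, $live) = @_;
    die "no $tracked here\n" unless -f $tracked;
    copy($tracked, $live) or die "copy failed: $!";
    return $live;
}

# Demo in a scratch repo layout:
make_path("scratch/.hg");
open my $fh, ">", "scratch/hgrc.tracked" or die $!;
print $fh "[paths]\ndefault = ssh://example/repo\n";
close $fh;
install_tracked_hgrc("scratch/hgrc.tracked", "scratch/.hg/hgrc");
print -f "scratch/.hg/hgrc" ? "installed\n" : "missing\n";
```

The copy step being deliberately manual is the point: hooks only become live after review.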

Or - since some (but by no means all) such metadata is per repo, and is meaningless in a clone, perhaps there should be a meta-VCS system.  This is fairly natural for me, since I have had my repos nested within each other for decades.

Unfortunately, not so natural for Mercurial.

Symlinks to URLs


As people have explained, although you can create a symlink that links to an URL, it does not accomplish much.  UNIX is not smart enough to look at the link content, recognize it as an URL, and then do something appropriate to it.

Nevertheless, I occasionally do create symlinks to URLs.  Mainly just as documentation - I am used to looking at ~/links/* when I forget where, on the filesystem, I have stored something.  It's not much of a stretch to do the same thing for URLs.  However, doing so tends to break consistency checks and other stuff that assumes or requires that symlinks point to valid filesystem locations.

Furthermore, I have made some of my own tools smart enough to recognize that, if they see a symlink where an URL is needed, they should dereference the symlink and look at its contents.  More and more of my tools treat URLs and paths in the filesystem as interchangeable.  Assuming, of course, that the URL protocol has appropriate methods for reading and writing.
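The core of that tool smartness is small. Here is a minimal sketch (my own code, not any particular tool of mine; the URL pattern is a guess at what counts as a URL):

```perl
use strict;
use warnings;

# If the path is a symlink whose target looks like a URL, return the
# URL; otherwise return the path unchanged.
sub resolve_path_or_url {
    my ($path) = @_;
    if (-l $path) {
        my $target = readlink($path);
        return $target
            if defined($target) && $target =~ m{^[a-z][a-z0-9+.\-]*://}i;
    }
    return $path;
}

# A symlink whose "target" is a URL; the filesystem doesn't care that
# the target is not a valid path.
unlink "notes-link";
symlink("https://example.com/notes", "notes-link") or die "symlink: $!";
print resolve_path_or_url("notes-link"), "\n";   # the URL
print resolve_path_or_url("/tmp"), "\n";         # plain path, unchanged
```

Note this only works on filesystems that allow dangling symlinks, which is exactly why the consistency checkers complain.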

Saturday, January 28, 2012

[[Write coalescing]] is the term some GPUs, notably AMD/ATI and Nvidia, use to describe how they, umm, combine or coalesce writes from N different SIMD threads into a single, or at least fewer than N, accesses. There is also [[read coalescing]], and one can imagine other forms of coalescing, such as atomic fetch-and-op coalescing.

At AFDS11 I (Glew) asked an AMD/ATI GPU architect
"What is the difference between [[write coalescing]] and [[write combining]]?"

He replied that [[write combining]] was an x86 CPU feature that used a [[write combining buffer]],
whereas [[write coalescing]] was a GPU feature that performed the optimization between multiple writes that were occurring simultaneously, not in a buffer.


Since I (Glew) had a lot to do with x86 write combining
- arguably I invented it on P6, although I was inspired by a long line of work in this area,
most notably the [[NYU Ultracomputer]] [[fetch-and-op]] [[combining network]]
- I am not sure that this distinction is fundamental.

Or, rather, it _is_ useful to distinguish between buffer based implementations and implementations that look at simultaneous accesses.

However, in the original NYU terminology, [[combining]] referred to both:
operations received at the same time by a switch in the [[combining network]],
and operations received at a later time that match an operation buffered in the switch,
awaiting either to be forwarded on,
or a reply.
(I'm not sure which was in the Ultracomputer.)

A single P6 processor only did one store per cycle, so a buffer based implementation that performed [[write combining]] between stores
at different times was the only possibility. Or at least the most useful.
Combining stores from different processors was not done (at least, not inside the processor, and could not legally be done to all UC stores).

The NYU Ultracomputer performed this optimization in a switch for multiple processors,
so combining both simultaneous operations and operations performed at different times
was a possibility.

GPUs do many, many, stores at the same time, in a [[data memory coherent]] manner.
This creates a great opportunity for optimizing simultaneous stores.
Although I would be surprised and disappointed to learn that
GPUs did not combine or coalesce
(a) stores from different cycles in the typically 4 cycle wavefront or warp,
(b) stores from different SIMD engines, if they encounter each other on the way to memory.
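To make the opportunity concrete, here is a toy model (my own sketch, not any vendor's actual algorithm): bucket the byte addresses issued by the SIMD lanes in one cycle by cache line, and count distinct lines as memory transactions. The 128-byte line size is an assumption for illustration.

```perl
use strict;
use warnings;

# Toy write-coalescing model: group per-lane byte addresses issued in
# the same cycle by cache line, so each distinct line costs one memory
# transaction instead of one per lane.
sub coalesce {
    my ($line_bytes, @addrs) = @_;
    my %lines;
    $lines{ int($_ / $line_bytes) } = 1 for @addrs;
    return scalar keys %lines;
}

# 32 lanes writing consecutive 4-byte words: fully coalesced.
my @unit_stride = map { 0x1000 + 4 * $_ } 0 .. 31;
print coalesce(128, @unit_stride), " transaction(s)\n";   # 1

# Same lanes scattered across 32 different lines: no coalescing possible.
my @scattered = map { 0x1000 + 128 * $_ } 0 .. 31;
print coalesce(128, @scattered), " transaction(s)\n";     # 32
```

The simultaneous case needs no buffer at all - just this grouping applied to addresses that are already in flight together, which is the distinction the AMD architect was drawing.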

I conclude therefore that the difference between [[write combining]] and [[write coalescing]] is really one of emphasis.
Indeed, this may be yet another example where my
(Glew's) predilection is to [[create new terms by using adjectives]],
e.g. [[write combining buffer]] or [[buffer-based write combining]]
versus [[simultaneous write combining]] (or the [[AFAIK]] hypothetical special case [[snoop based write combining]]),
rather than creating gratuitous new terminology,
such as [[write combining]] (implicitly restricted to buffer based)
versus [[write coalescing]] (simultaneous, + ...).

= See Also =

This discussion prompts me to create

* [[a vocabulary of terms for memory operation combining]]

Project structure: code, tests, external dependencies

Say you have a project foo, which I will call .../foo, to emphasize that there is probably a project directory somewhere in the namespace.
Where do you put the tests?  It's nice to put them in .../foo/tests, so that when you check out the project, you get the tests as well.

It's also good to minimize the external dependencies of .../foo.

But what if the tests have more external dependencies than the non-test part of the project?  Should you increase the external dependencies of the non-test part just to have the tests?

Conversely, should you make it harder to write the tests by forbidding external dependencies in them?  Tests are hard enough to write, and they often depend on extra libraries that the source code per se does not.

More specifically, should you pull in stuff with increased external dependencies just because you check out .../foo and .../foo/tests comes along for the ride?

If you don't want the extra dependencies in .../foo/tests to come along with .../foo, you might structure them as separate modules, possibly separate in the file space:


This works, but it creates extra levels of such "meta-modules": .../foo+tests, .../foo+interactive_tests, .../foo+debugging_tools+tests, etc.

You might structure it as a series of optional modules, all within a single metamodule:


or, rather


and so on.

It's still annoying to have an extra level of indirection, but the purity of the bodily fluids of .../foo is maintained.
Now, of course, it is highly likely that you will have some tests within foo and some that are associated with foo, but that foo won't let in:

This can be confusing, but apart from changing names it may be unavoidable: if a country has immigration control, you often get refugee camps on the border.  The country may ignore them, but the UNHCR does not.

Some systems allow this "stuff" to be overlays within foo:

    .../foo/tests -- optional
    .../foo/stress-tests  -- optional

Thursday, January 26, 2012

Perl test brackets with deterministic finalization

I just realized that you can use deterministic finalization in Perl just like C++. Yay!
    package Brackets;
    # my first attempt to use deterministic finalization in Perl
    # call as: my $b = Brackets::new("name");
    sub new {
        my $self = {};
        $self->{name} = shift @_;
        bless $self;
        print "<TEST START $self->{name}>\n";
        return $self;
    }
    sub DESTROY {
        my $self = shift @_;
        print "</TEST END $self->{name}>\n";
    }
Note that I use my "pseudo-XML" notation:
<TEST START name>
</TEST END name>
It's not nice to violate the standard, but IMHO this is a lot easier to read than
<TEST context="START" name="name">
</TEST context="END" name="name">
and readability is a concern - since I plop these things in test output read by my human coworkers. Human coworkers who really don't like XML because it is so ugly. I can readily translate my pseudo-XML into real XML, in case I wanted to use any real XML tools. And can do many operations without such translation. Unfortunately, there aren't many real XML tools to be used without a lot of work :-(. I need to fix up my pseudo-XML UNIX command line tools, suitable for use in pipes and so on.
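To show the nesting the trick buys you, here is a self-contained version (my reconstruction of the idea, not the original code): the open bracket prints at construction, the close bracket prints when the object leaves scope, so nesting follows lexical scope automatically.

```perl
use strict;
use warnings;

package Brackets;
sub new {
    my ($class, $name) = @_;
    my $self = bless { name => $name }, $class;
    print "<TEST START $self->{name}>\n";
    return $self;
}
sub DESTROY {
    my $self = shift;
    print "</TEST END $self->{name}>\n";
}

package main;
{
    my $outer = Brackets->new("suite");
    {
        my $inner = Brackets->new("case1");
        print "... test body ...\n";
    }   # case1's close bracket prints here, deterministically
}       # suite's close bracket prints here
```

Because Perl uses reference counting, DESTROY fires exactly at scope exit, not at some later garbage-collection time - which is what makes the brackets balance even when a test dies mid-scope.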

Wednesday, January 25, 2012

googletalkplugin issues

I started having problems with Google Talk, specifically the googletalkplugin that I get when clicking on "phone" in gmail. I believe I have had these before. Not sure if the old workarounds have all been tried; none so far work. Started off with not being able to hear gtalk dial. Then got messages like "XXX-XXX-XXXX (a phone number) cannot be reached." Uninstalled/reinstalled. With rebooting. Running as Administrator. No joy. Just noticed many googletalkplugin processes running. This is what tickled my memory. A new one starts every time I start Chrome. Doesn't always go away.

Tuesday, January 24, 2012

Code ordering, want none: perl example

I have several times posted how I find languages without code ordering pleasant, more readable.

  let x = y + z
      where z = some long and complicated expression
Here's an example from writing "shell scripts" - although here I am doing the shell script in Perl: Start off with:
   system("command with some long and complicated command line");
Realize that you want to repeat the command in a pre-announcement of what you are doing, and in an error message:
   print "RUNNING: command with some long and complicated command line\n";
   my $exitcode = system("command with some long and complicated command line");
   print "error: exitcode=$exitcode: command with some long and complicated command line\n"
      if there_is_a_problem($exitcode);
Now avoid repetition: Standard way:
   my $cmd = "command with some long and complicated command line";
   print "RUNNING: $cmd";
   my $exitcode = system($cmd);
   print "error: exitcode=$exitcode: $cmd\n"
      if there_is_a_problem($exitcode);
In my opinion the non-code ordered way is more readable:
   my $cmd;  # maybe some declaration to say it is not ordered?
   print "RUNNING: $cmd";
   my $exitcode = system($cmd = "command with some long and complicated command line");
   print "error: exitcode=$exitcode: $cmd\n"
      if there_is_a_problem($exitcode);
I think it is more readable because you see the value set at the point where it matters, where it is used most intensely (the other uses are mild, just prints). Note that I have used a scope.
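Wrapped up as a sub, the pattern looks something like this runnable sketch (run_cmd is my name for it, not anything standard):

```perl
use strict;
use warnings;

# Announce, run, and error-check a shell command, naming the command
# string only once.
sub run_cmd {
    my ($cmd) = @_;
    print "RUNNING: $cmd\n";
    my $exitcode = system($cmd);
    print "ERROR: exitcode=$exitcode: $cmd\n" if $exitcode != 0;
    return $exitcode;
}

run_cmd("perl -e 'exit 0'");   # no error line
run_cmd("perl -e 'exit 3'");   # prints an ERROR line
```

Of course, factoring out a sub sidesteps the ordering question rather than answering it - the point of the post is about the cases where you don't want to factor.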

Friday, January 20, 2012

Tagging is so passe'

Tagging is so passe'.

Manually adding keywords to stuff you do.

What we need is "tag suggestion" software.  Software that looks at what you have written, compares it to a corpus - perhaps your stuff, but perhaps stuff from others - and suggests tags for you to choose from.


Automatic email folder classification rules are so passe'...


Gmail's labels are so passe'.  Same reasons.

(Plus the absolute lack of structure.)


I have played around with Bayesian codes, for determining if tags or labels should apply.
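The sort of thing I mean, as a trivial sketch (plain word overlap - nothing as fancy as real Bayesian scoring; the corpus and sub name are invented for illustration):

```perl
use strict;
use warnings;

# Suggest tags by scoring word overlap between a new post and sample
# text for previously used tags. A real system would weight words
# Bayesianly; this is just the minimal shape of the idea.
sub suggest_tags {
    my ($text, $corpus) = @_;   # corpus: tag => sample text
    my %words = map { lc($_) => 1 } $text =~ /\w+/g;
    my %score;
    while (my ($tag, $sample) = each %$corpus) {
        $score{$tag} += $words{ lc $_ } // 0 for $sample =~ /\w+/g;
    }
    return sort { $score{$b} <=> $score{$a} } grep { $score{$_} } keys %score;
}

my %corpus = (
    VCS   => "branch merge commit repository checkout",
    email => "inbox folder label message spam",
);
my @tags = suggest_tags("how I merge a branch back to the repository", \%corpus);
print "suggested: @tags\n";   # suggested: VCS
```

The human still chooses; the software only ranks candidates - that is the difference from automatic classification rules.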


Gmail's "important" filter is a step.  But more needed.  Plus, a more personal classification system.


I remember GNUS gnus-topic-mode.el in EMACS fondly.  It realizes that at different times of the day, or in different modes, I may prioritize things differently.

THERE ARE NO FIXED PRIORITIES for personal information management.

My priorities when I am reading email on vacation, or in the evening at home, are different than in the day at work.


Why tag?

Why not just use search?

Tagging is a crystallization of information content.  E.g. it records the fact that, at some time, you decided that a post was about VCS, Version Control Software, even though it might not contain the phrase in a way that search would turn up.

Tags make it easier to track [[terminology drift]].  (TBD, need to write a wiki/blog on that).

E.g. what we now call VCS (Version Control) might have been called Revision Control years ago, or CM (Configuration Management).

Terminology drifts over time.  Tags make it easier to track such drift, although even tags drift.

AAdvantage versus Account Aggregation

I just learned that American Airlines' AAdvantage frequent flyer program has sent cease and desist letters to account aggregators.

(This shows how often I check my AAdvantage miles balance - only when I am planning vacation.  It also shows exactly why I depend on an account aggregator - for all of these bleeding accounts...)

Now, perhaps I am exposing myself to hackers because I admit that I use an account aggregator.  Single point of failure, and all that.

(By the way, I would be much happier if the aggregators had read-only access to my accounts - if they could only see balances, but not change passwords.)

But the overall thing is: there are, I have, too many bleeding accounts. Too many blinking passwords.

Account aggregators are one major tool to manage this.

If a company will not let *any* account aggregator access them, well, then I do not need to be a customer of that company.

I was considering dropping American Airlines anyway, because of their financial position. This is just more incentive.


Heck, if AAdvantage were implementing better security, such as captchas, I would be happy. But sending "cease and desist" letters - that's garbage.

Thursday, January 19, 2012

Branch purpose commit message

I would still like to have a commit message that I write at the time I generate a branch, saying why I am bothering to generate a named branch.

Wednesday, January 18, 2012

VCS thoughts

Some fast, probably cryptic, thoughts after a day merging with a "I wasn't familiar with it originally but I am painfully familiar with it now" VCS tool.

  •  Merges are branches.
    • Mercurial tracks merge workflow in a workspace, assuming that a merge will be a single commit.
    • But today I had a complicated enough merge that I started a named branch off just for it. Accomplished the merge between my task branch and the trunk. And then merged back from this "merge branch" to the trunk.
    • Worked fine, but it would have been nice to have some workflow tracking along the branch. Like "hg resolve", but "hg resolve" stops at the first commit boundary.
    • Had to fall back to tracking things by hand, in a text file.

  • Mercurial's names are awkward
    • "hg revert" isn't revert
      • "hg revert -r REV file" isn't "revert".  It is "include this revision of the file in the candidate commit that you are building in your workspace."
      • hg revert corresponds to cvs update
    • "hg update" isn't update
      • "hg update -r REV-or-BRANCH" isn't "update". It is "switch the revision or branch that the candidate commit you are building in the workspace will be applied to as a child."
      • hg update corresponds to cvs checkout, although cvs is pretty sucky there too.  
      • It's rather like a rebase when you have no revisions checked in yet
    • Mercurial's branches are not branches.
      • They are floating tags
      • Not necessarily lines of evolution.
      • What would be a better name? "Stream of development?" "Genealogy line"?
      • I'd like to have evolutionary branch-lines, as well as what Mercurial has.

  • Tags should be versioned. 
    • At least Mercurial got that right.
    • But old tag versions should be visible in the log.
    • And it should be possible to refer to an old tag version, something like "last week's official release"

  • Mercurial's anonymous branches rule
    • See, I am not purely dissing hg
    • But "hg tip" has obviously not been brought up to date with named branches

  • Can anyone tell me how to do the equivalent of "cvs update -j branch-base -j branch" in Mercurial?
    • Without using an external patch
    • Hint: "hg merge -r branch" doesn't work, if you have anti-patches on the branch

  • I want to be able to "pull onto a branch" or "push a branch".  Not just "push or pull all the branches in the repository".  I.e. I want branch renaming or mapping in pull and push

  • Mercurial doesn't do partial checkouts or checkins?
    • Maybe not everywhere
    • But you can screw yourself up in the same way with "hg ci incomplete list of files"
      • I'm not suggesting denying partial "hg ci files"
      • But I'd like to do it in general.

  • Partial checkins and checkouts should correspond to branches, with merging actively encouraged.
    • I've talked about this many times.

Enough already.

"system/command" versus "system command"

A lot of tools have a master command with subcommands. For example
hg clone ...
hg merge

cvs co
cvs update

git clone
I think that the first place I encountered this was with mh. Or, since my friend MH says that mh did not have subcommands, I may have made the following change myself for mh. Why subcommands? I think mainly to avoid name collisions in the bin. Years ago, I kluged whatever processes PATH - I think it may have been the shell - to search not just for "executable" on the PATH, but also "dir/executable". I.e. instead of saying
hg update
I could have said
   hg/update
Not much of a difference. But it makes it easier to code systems that have lots of subcommands. Also, for users of shells like bash and csh: "!hg/update" works, whereas "!hg update" doesn't (unless your shell has tweaks).
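A sketch of what that lookup could look like (my reconstruction - the real kluge was in whatever resolved PATH, and resolve_command is a name I made up):

```perl
use strict;
use warnings;
use File::Spec;

# Resolve "tool/subcommand" by searching each PATH directory for a
# subdirectory "tool" containing an executable "subcommand". Plain
# names fall through to ordinary single-component lookup.
sub resolve_command {
    my ($name) = @_;
    my @parts = split m{/}, $name;
    for my $dir (File::Spec->path) {
        my $cand = File::Spec->catfile($dir, @parts);
        return $cand if -x $cand && !-d $cand;
    }
    return undef;
}

print resolve_command("perl") // "not found", "\n";
```

The nice property is that "hg/update" collides with nothing: each tool gets its own directory-shaped namespace on PATH.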

Tuesday, January 17, 2012

Mercurial whine:
hg merge -r tip
is NOT the same as
hg merge -r default
Because Mercurial's tip may be on a branch. tip is just the most recent changeset, anywhere. I have hit bugs caused by following hg recipes that talk about using -r tip. Sigh.

Tentative checkin for branch only, not to be merged?

Here's another: Working on a branch, I just did a pull, and merged from trunk onto the branch. I notice that somebody else has a minor bug on the trunk: it looks like they forgot to check in a test reference pattern, although it is also possible that the test was checked in before the golden behavior was established. I report the bug. But I would like to silence the error, at least on my branch.

I can make the change to the reference patterns. However, I do not know if that change is good. I just want to silence it, or mark it as a known error. I don't want my change to propagate back to the trunk when eventually I merge and push. I.e. my branch now contains some tentative stuff, as well as some stuff that I am confident will soon be merged.

On CVS I would create a per-file branch for the reference, and work off a merged directory. Eventually, I would update to the main branch, and not get my patched test. Unclear how to do this in Mercurial, apart from remembering that I have to delete it eventually. Oh, here's a way:
// working on branch B
hg pull
hg merge -r tip
make test
// make change to silence test error
hg branch Bb
// somehow copy old version of patched file to the new branch
hg ci // on Bb
hg update -r B
// now work
// when ready to merge, do the usual
hg pull
hg merge
make test
// now do the extra step to be confident that you aren't pushing anything broken
hg update -r Bb
hg merge -r B
// now should have B, except for that one change that was tentative
make test
// see if the bug was fixed by someone else...
// ok, now merge to default branch, and then push
// - since I am paranoid, I might pull and test again
hg update -r default
hg merge -r Bb
make test
hg push
Works, but is complicated. I am quite likely to forget that I am supposed to merge from branch B onto Bb before merging to the trunk. {{Category-VCS}}{{Category-hg}}

AFAICT Mercurial only allows you to merge entire changesets.

Here is an example of why I may want to do a merge at lower granularity:

  • I am working on a branch B
  • I do a pull from the parent, hg pull
  • I merge from the parent on to my branch B, because I am not yet ready to merge back onto the trunk (aka default branch)
  • I run my tests
Now I notice that the .hgignore that I got from the parent is missing an entry.  I fix it in my repository, to shut it up.

Then I realize that I should push such a generic change back to the trunk asap.

What I want to do is something like:

  hg clone work-repo quick-fix
  cd quick-fix
  hg update -r default
  hg merge -r B  .hgignore     # to merge just the change to .hgignore into the trunk
  make clean;hg purge
  make test
  hg ci
  hg push
I.e. I want to merge JUST the change to .hgignore, using "hg merge -r B .hgignore". Instead I do
  hg clone work-repo quick-fix
  cd quick-fix
  hg update -r default
  cp ../work-repo/.hgignore .
  make clean;hg purge
  make test
  hg ci
  hg push
I.e. "cp ../work-repo/.hgignore ." is used instead of "hg merge -r B .hgignore". Although this works, it makes me unhappy. E.g. I may blithely have overwritten other changes, e.g. if I cloned from the master rather than the work-repo. Not to mention the fact that the "hg push" above would push my branch. And my teammates do not want my branch to be pushed, because it has too many fine grain checkins - they want just a single checkin message. But that's another story.

Saturday, January 14, 2012

I'd like to have a text parser, like Perl CPAN Text::ParseWords,
that *only* breaks the text into words
- but which does not transform the words, handle escape characters, etc.

For example,
      shellwords("a b 'c d' e")
returns
      ("a", "b", "c d", "e")
i.e. it breaks the text up into words,
but it also transforms the words - the quotes around 'c d' are stripped.

I would like to separate the breakup from the transformation, keeping
      ("a", "b", "'c d'", "e")

Note that if you ever encounter such a list whose words can themselves be further broken up,
then you know that it has been parsed by some tool after your original parser.
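What I want is something like this sketch of a quote-preserving splitter (my own code, not part of Text::ParseWords; it handles quoting but deliberately no escapes):

```perl
use strict;
use warnings;

# Split text into words, keeping quoted spans as single words WITH
# their quotes intact -- no unquoting, no escape processing.
sub split_words_raw {
    my ($text) = @_;
    my @words = $text =~ /( '[^']*' | "[^"]*" | \S+ )/gx;
    return @words;
}

my @words = split_words_raw("a b 'c d' e");
print join("|", @words), "\n";   # a|b|'c d'|e
```

Because the quotes survive, a later pass can still tell which words were quoted - which is exactly the information shellwords throws away.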

[[Category:Programming]] [[Category:Text]]

Sunday, January 08, 2012

Thumb drive as a webserver / NAS

I wish that I could find a flash drive that acted as a NAS.

Most USB flash drives are passive storage. Encryption is done by the OS you plug into.  Since I run Windows and various *IXes, and want to access my data from each, depending on a particular OS is a pain.

Booting an OS from the USB flash drive is better - but still not great, since it makes assumptions about the platform you are plugging in to. Typically, that it is a PC.

Flash drives have non-trivial processors in them.  Probably running Linux.  Why not make them a peer?

Issue: network interface.

I don't really want to add a typical Cat 5 ethernet connector - what is that, RJ45, 8P8C plug? - to the flash drive, neither in addition to nor replacing USB.

Q: I am sure there is a standard for networking over USB - but how ubiquitous is it?  I do not recall ever getting the option, when I plug in a USB flash drive, of connecting a web browser to a server running on the drive.

Friday, January 06, 2012

Gratuitous key position changes.

Was wondering why "search" or "find" in Google Chrome was broken.

Wasn't.  For the past week I was using a keyboard that was not my usual, with the keys in the lower left hand corner looking like Ctrl,Fn,...

Just switched back to what has been my usual keyboard for several months, with those keys swapped: Fn,Ctrl,...

It's funny how these trivial differences in devices are some of the most annoying.

Last time I shopped for keyboards, I bought 3 identical.  Not my favorite keyboard, but cheap enough that I could afford 3 of them.  Home, work, and Oceanside.

Yeah, yeah: remapping. Loses.

(The only good form of keyboard remapping is one that is downloaded into a PROM on the keyboard, so that it works in all modes, even before the OS boots.)

Thursday, January 05, 2012

hg schemes extension


I like the idea of this extension, that allows you to create shortcuts for URLs you use often, e.g. cloning.



ups = ssh://my-uarch-performance-simulator/...
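(For context: defining such a shorthand takes an hgrc fragment along these lines - the [schemes] section name is per the extension as I recall it, and ups is the example above.)

```ini
[extensions]
schemes =

[schemes]
ups = ssh://my-uarch-performance-simulator/...
```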

However, it was something of a no-sale when I saw that the URL was not expanded in the clone:

> hg clone ups: u
> cd u
> hg paths
default = ups://

Or in WorkDir/.hg/hgrc

default = ups:

PRO: when the remote repository changes location, this *might* allow you to change the scheme URL shorthand in hgrc, and have things work in the possibly multiple clones you have outstanding.

CON: it loses some documentation. If I got run over by a beer truck, my replacement might have trouble finding where it was from.  Worse if a repo was restored from backup (particularly if people keep branches as repos), and my home directory with the ~/.hgrc enabling the SchemesExtension was not present.

I'm not sure what I would like. Possibly always expand the shorthand.  Possibly record the shorthand with the long path as a comment:
default = ups: # ssh://my-uarch-performance-simulator/...
> hg paths
default = ups:// # ssh://my-uarch-performance-simulator/...

TBD: move this to my wiki Category:VCS.

Tuesday, January 03, 2012


Gave in, worn down... Installed VirtualBox, and Ubuntu in a virtual box, on my tablet PC running Windows 7.

Stream of  consciousness...

Monday, January 02, 2012

Recovering diverged home directory version control

I have long version controlled my home directory.  CVS. Git. Hg.

Unfortunately, they diverged.  Divergence happens naturally with CVS.  You have to work hard to get git and hg to diverge, but I did.

Now I want to merge the diverged home directories back together.  Preserving the history if possible, from the different VCS.  Sometimes just merging files.


Today: I want to start merging a diverged linux tree, from a flash drive, with my working tree (which started off on cygwin).

I created a new hg repo, created a branch on it, and then imported the tree to be merged.

I pulled/pushed this with my main home hg repo.  Had to use -f, to force unrelated repos to be together.

Now I have a single repo, with my current working (cygwin derived, homedir, and a not working linux homedir, on different branches.  The former n the default branch.

That's okay.  Not so bad.  A single history object (although I have kept separate working space trees).


Now I want to merge, a file or a few files at a time.

E.g. copy the README from the linux branch to the default (cygwin-derived) branch.

AAARGH!!!!  Hg doesn't handle partials... neither merges, nor copies, nor...  Hg just plain really wants to lose track, not make doing this activity easy.

Missing X fonts

Like http://ubuntuforums.org/showthread.php?p=11544410, I was getting errors such as

Warning: Cannot convert string "-*-courier-medium-r-*-*-*-120-*-*-*-*-iso8859-*" to type FontStruct
Warning: Cannot convert string "-*-helvetica-medium-r-*--*-120-*-*-*-*-iso8859-1" to type FontStruct

(Except that I was getting the errors, not on a fresh Ubuntu install, but when trying to use a fairly new machine at work, with what is probably a newer version of Red Hat.)

The post I quote fixed things by setting ~/.Xdefaults to

emacs*font: 7x14

And then doing xrdb -merge ~/.Xdefaults

On my end, I found that neither 7x14 nor courier worked.  But font "fixed" did.

An Amusing and Frustrating Anecdote about Google 2-step Verification

I like Google 2-step verification - in which you normally log in with a password, but where, if you are logging in from a new machine, etc., you "verify" your login by entering a one time code sent to your cell phone by voice or text.

I liked the idea as soon as I heard of it, but was reluctant to sign up for it because I frequently use my laptop computer in places where cell phones don't work - e.g. at the coast in Oregon, and, most recently, at my new house in Portland's hills.

I was scared that I might end up unable to log into my gmail, lacking cell phone.

Nevertheless, after reading rave reviews, I finally gave in and signed up for 2-step verification.  And have continued to use my laptop fairly successfully at the coast without cell phone coverage, since normally it has already been verified.

But over New Year's I finally tripped up:

Because of a bug with googletalkplugin.exe (a new copy was spawned every time I started Chrome: I had 52 copies when I realized what was happening) I uninstalled and reinstalled the plugin, and eventually Chrome itself (since the plugin would not uninstall while Chrome was running).

So when I tried to log back into gmail on Chrome, 2-step verification was required.  But my cell phone doesn't work at the coast.

Now, Google 2-step verification has a backup phone number, which can be a land line using voice.  But remember that I said the new house that I have just bought also lacks cell phone coverage?  Guess what my backup phone was?

And Google 2-step verification does have a backup set of one time passwords.  I know I printed them out for my wallet.  Umm...  got a new wallet, recently, smaller, and did not carry it over.

So now the fun begins:  I don't have to drive too far to receive a text message.  I'll try to login, get Google to send the verification code, drive to where I can receive a text message, drive back.

Try #1: got the message.  Actually, got several verification code messages.  Drive back, they don't work.  Perhaps I got confused, and typed the wrong verification code into the wrong box.

Try #2: I realize that I may not need to drive the few miles to the next town.  The mountain next to my house may have reception.  Drive up it, yep, received the text message.  Drive down... Nope, didn't work.

I'm beginning to think there is a timeout.

Try #3: Repeat. An hour or so later, since I had to charge my cell phone - the battery drains quickly in this area. But this time, I can't get any bars on top of the mountain.  The fog has moved in and the sun has set, affecting signal strength.
So I drive to the next town. Signal, but no text message.  I wait ten minutes, start driving back... and the message arrives while I am driving.  Have I mentioned that AT&T Wireless has occasionally taken >4 hours to deliver text messages?  20 minutes is par for the course.

Doesn't work.  I'm getting pretty sure there is a timeout.

Try #4: This time I request the verification code, drive out, and call back to ask my wife to enter it.
However, my laptop has gone into power saving mode, and although I disabled the power-on password, my wife doesn't realize what has happened, and tries to use the second computer sitting next to the laptop that needs the verification code.

2 days later we try again: my wife drives to the next town with my cell phone.  She calls me back when she gets a signal.  I request the verification code. She waits a minute or so - fortunately, this morning AT&T is fast - and reads it back to me over the phone. I enter the code, and all is well.

I change my backup phone number to the landline at the coast. Realizing that this will need to be changed again when I get back to Portland.

And I write the backup passwords down by hand.