The content of this blog is my personal opinion only. Although I am an employee - currently of Nvidia, in the past of other companies such as Imagination Technologies, MIPS, Intellectual Ventures, Intel, AMD, Motorola, and Gould - I reveal this only so that the reader may account for any possible bias I may have towards my employer's products. The statements I make here in no way represent my employer's position, nor am I authorized to speak on behalf of my employer. In fact, this posting may not even represent my personal opinion, since occasionally I play devil's advocate.

See http://docs.google.com/View?id=dcxddbtr_23cg5thdfj for photo credits.

Thursday, March 22, 2012

Personal versus Version Control Log

I keep a log. ~/LOG. (Actually, I really need to keep a personal ~/LOG and ~/work/LOG, since the latter may belong to the company. But that's another issue.) There is redundancy between my personal LOG and the log messages checked into a version control log. Like Mercurial, CVS, RCS, SVN, git. I often find myself writing similar messages into both my version control log and my personal log. Cutting and pasting when I remember to. (Unfortunately, hg's integration with emacs' VC mode makes the cutting and pasting a bit harder, since it removes the buffer in which the log message was being written as soon as it is checked in. Old RCS mode was more convenient, since it just pushed that buffer down in the stack, and I could easily go back and get the text. Not so hard to do an hg log and then cut and paste, but often too much hassle.) This is an example of the need for a multicast. Just like I sometimes want text to go to both my blog and my wiki, and USENET comp.arch, and ... I also sometimes want what I have written to go to the version control log and my personal ~/LOG (and maybe work ~/work/LOG, and ...) Cut and paste would be okay if I weren't doing it so much. But since I do it so much I want something more convenient. How can I say it is too much hassle: because I often don't propagate stuff to all the logs I should.

Expressing fetch-old-and-do-new in a High Level Language

Quite often I find myself wanting to test the old value of some variable, using that to go into an IF, while then or soon thereafter setting a new value.
For example:

  our $titles_already_printed;
  sub print_titles {
     if( ! $titles_already_printed ) {
         print "TITLES...";
     }
     $titles_already_printed = 1;
  }

This doesn't look so bad if it is a simple variable:

  our $f;
  sub print_titles {
     if( ! $f ) {
         print "TITLES...";
     }
     $f = 1;
  }

But it can be painful if the flag variable is accessed via a long and complicated path:

  our %titles_already_printed;
  sub print_titles {
     my $titles = shift;
     if( ! $titles_already_printed{$titles} ) {
         print "TITLES...";
     }
     $titles_already_printed{$titles} = 1;
  }

And there can be more than one way of expressing it:

  our %titles_already_printed;
  sub print_titles {
     my $titles = shift;
     if( $titles_already_printed{$titles} ) {
         return;
     }
     print "TITLES...";
     $titles_already_printed{$titles} = 1;
  }
The return form is convenient for functions - but only if it is correct to return. I have seen quite a few bugs caused by such returns, when there is later code (perhaps added later) that should not be skipped by the return. The first form is convenient for straight line code, not wrapped in a function. It is annoying that refactoring code from straight line into a function may sometimes require changing its control flow structure.

Here are some other forms. First, separating the flag manipulation from the action:
  our %titles_already_printed;
  sub print_titles {
     my $titles = shift;

     my $old_result = exists $titles_already_printed{$titles};
     $titles_already_printed{$titles} = 1;

     if( ! $old_result ) {
         print "TITLES...";
     }
  }
You can probably see where I am going. Would it not be nice to avoid the repetition? Something like:
  our %titles_already_printed;
  sub print_titles {
     my $titles = shift;

     my $old_result = 
        lambda_define_and_invoke_here( NAME = $titles_already_printed{$titles} ) {
            return_value = exists NAME;
            NAME = 1;
        };

     if( ! $old_result ) {
         print "TITLES...";
     }
  }
Purely syntactic sugar, but...
  • lambda_define_and_invoke_here defines a lambda function, a code block, in part to get parameterization so a short NAME can be used. It is defined and used immediately, so that we don't have to repeat ourselves by defining it associated with a name, and then immediately invoking that lambda name.
  • Something like Jensen's device (name substitution) is used for NAME, in this example because $titles_already_printed{$titles} doesn't exist at the time of invocation. If that weren't an issue, a non-Jensen's-device lvalue could be used. Extend that to a special lvalue for $titles_already_printed{$titles}, one that creates $titles_already_printed{$titles} if it doesn't already exist and is later assigned to. A "potential lvalue" for an ordinary lvalue that may not exist yet.
  • return_value is used so that we don't have to obscure things by creating a tmp for the old value.
This is purely syntactic sugar, but... I see this code a lot, and it is often slightly different each time. The purpose of an HLL is to make expression easier and less error prone. Note: I am NOT talking about fetch-and-op or compare-and-swap for purposes of parallel programming synchronization - although they are obviously related. And, bonus if idioms expressed this way could be automatically translated to hardware atomic RMWs.
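For comparison, in a language with first-class hash tables the idiom factors into a one-call helper. A minimal sketch in Python (the helper and its name are my own, not standard library):

```python
def fetch_old_and_set(table, key, new_value=True):
    """Return the old value (None if absent), then install the new one.

    One call: the long access path is written once, and the caller
    needs no temporary for the old value.
    """
    old = table.get(key)
    table[key] = new_value
    return old

titles_already_printed = {}

def print_titles(titles):
    # test-old-and-set-new in a single expression
    if not fetch_old_and_set(titles_already_printed, titles):
        print("TITLES...")
```

The helper plays the role of lambda_define_and_invoke_here above: the repetition of the access path is hidden inside it, at the cost of passing the container and key separately.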

Wednesday, March 21, 2012

Focus on the focus window

I wonder if I should code stuff to bring the window I am working on to the center of my field of view, while moving other windows to the side. Again, related to more screens?

Shells tightly bound to directory?

I am noticing a pattern: I have long used emacs shell windows. But for some reason, more and more I am creating shell windows that are intended to be tightly bound to their directory. Whose name is their directory. Where it gets confusing if I chdir in them. In fact, I have had errors when I quickly typed keystrokes to switch to a shell window and ran a command expecting to be in its directory - PROBLEM: I had changed away from that directory, deliberately or by accident.

I am beginning to wonder about DISABLING chdir in that sort of shell window. Or at least making it a bit more onerous.

Why this is happening more now than in the past, I dunno. Possibly because I have more screens.

Similar happens for xterms, but I use xterms much less often than I use emacs shell windows.

Understanding Exogenous shocks to Startups and Conditional Prophecy


    I remember first observing this when I worked at Bessemer. For example, there was a startup that supplied services to video websites. For years, the company soldiered along, barely growing. Then, suddenly, YouTube blew up and this company took off along with it. 

    As a founder, these exogenous shocks are out of your control, but you can 1) understand what exogenous shocks you depend on, 2) try to guess when those shocks will hit, 3) manage your runway so you survive long enough for them to hit.

This sounds very much related to my classification of prophecy or prediction:

There are 4 types of prophets:

(1) False prophets who are always or usually wrong, or are right by random chance. These are common.

(2) Accurate prophets, who can predict exactly what is going to happen, exactly when it is going to happen.  These are exceedingly rare.

(3) Eventual prophets, whose predictions, oft repeated, almost always eventually come true.  Here are some from the past, and some from the near future: "Computers will fit on your desk".  "Tablet computers will one day be popular." "Some day, computers will be worn, not carried, and will display on your glasses, contact lenses, directly onto the back of your eyes, or direct neural connect". Nobody with a clue will be surprised by the last set of eventual predictions coming true: the problem is knowing when, and how.  How to make it happen, and how to profit by it happening.

The final class of prophet is, I think, the most interesting, because it is sometimes attainable and more useful than anything except accurate prophecy:

(4) Conditional or contingent prophets:  they may not be able to predict exactly when something will happen, but they can make predictions about dependency graphs of events.  For example, referring to Chris Dixon's blog post: "If internet video ever takes off (like YouTube eventually did) then there will be a market for video services for websites."  Or let me try: "Contact lens displays may take off (a) when nearly transparent pixels dense enough to be circa 1Mp on the size of a contact lens can be achieved, (b) when signals (whether by wire or non-wire (not necessarily electromagnetic wireless)) can draw (not refresh) such pixels on a device, and (c) when a standby device <0.1W powered by energy harvesting can always run, and wake up a more power hungry device when needed."

When I helped start Intel's Microprocessor Research Lab I created several such dependency graphs.  Note: not just simple chains, since there may be more than one way of arriving at a result.
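Such a graph can be encoded very simply. A toy sketch in Python (my own encoding, not the MRL graphs): each contingent prediction lists alternative precondition sets, and alternatives capture "more than one way of arriving at a result":

```python
# Each prediction maps to a list of alternative precondition sets:
# the prediction becomes live when ANY one set has fully occurred.
PREDICTIONS = {
    "market for website video services": [
        {"internet video takes off"},
    ],
    "contact lens displays take off": [
        {"dense transparent pixels", "low-power standby device"},
    ],
}

def live_predictions(events):
    """Which contingent predictions are now unlocked by the given events?"""
    happened = set(events)
    return {p for p, alternatives in PREDICTIONS.items()
            if any(pre <= happened for pre in alternatives)}
```

The useful query is not "when?" but "given what has happened, what is now possible?" - which is exactly what a conditional prophet promises.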

Sunday, March 18, 2012

Sharing Kindles in a Family

We want to share Kindles between the members of our family - so that we can treat them as a pool of devices of different form factors, ranging from small through large.

For that matter, we often read the same books.

It looks like the only way to do this sharing is to share a Kindle account.

Since I don't want to have a credit card attached to the account my teenager logs in to, I create a second account - one associated with the Kindle, the other for purchasing books.  I can then use "give a book" to, well, give a book from the CC account to the K account.

For that matter, my wife and daughter both have their own Amazon accounts. Makes gift card management easier.  Now we must remember to give books to the K account.

Lossage: subscriptions need a credit card account.  I may arrange a card with very low limit.

If at some time we no longer need to swap Kindles of various sizes, we may rebind.


It is annoying that the Kindle does not allow multiple accounts. In this, it is like the early DOS and Windows PCs, who could not understand that multiple different users might want to use the same device.

Saturday, March 17, 2012

As a result of my move, I am losing the NAT router w. wifi that I have been using for the past few years.

At my new house Comcast is just giving me a cable modem.  (I may try to swap boxes, but they probably won't let me get away with it.) (This surprised me: I did not know there were naked cable modems available any more.)

So, I am shopping.  Any recommendations?

I know: I should just take one of my old machines and install the latest flavor of open source router.  But (1) I'm lazy, and just want this to work, and (2) I don't have any machine newer than year 2000 with slots for a second network card, and don't feel like shopping for USB NICs and ... and ...  (As I have mentioned before, I went almost a decade without buying a desktop PC, laptops only, although I did give in and buy a cheap desktop last year.)

The usual shopping list:


* wifi - nice, but I have some old wireless routers that I can use.  Still, it's nice to have just one box. (For various reasons, I don't want to use those wireless routers to connect to the net.)

Since I am shopping around, I might as well look for something that is on my wishlist:  stats.

Basically, I would like the router to keep a reasonable amount of local stats.  Enough so that I can document what percentage of the time, and what percentage of bandwidth up and down, I spend VPNed to MIPS, so that I can at least deduct that from my income for tax purposes. (Perhaps if it's a high enough percentage, I can pay for more bandwidth, and get better Skype to you, Mike.)

Not necessarily a full connection log, although that would be nice if there was enough storage.

But at least 
* total packets/bytes up/down
* total packets/bytes to a few IP addresses I identify
* total packets/bytes by IP for some selection of the top few IP addresses (not explicitly identified)
* total 1minute time intervals in which there is any traffic, and/or any traffic to those IPs identified above
* histogram of those intervals by time of day and day of week

With reasonable ability to download automatically, so that I can keep the stats on another machine.
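The wishlist above reduces to a small accumulator. A sketch in Python (field names are mine; a real router would persist this to flash or push it out via syslog or SNMP):

```python
from collections import Counter, defaultdict

class TrafficStats:
    """Toy model of the wishlist: totals, per-IP totals, active 1-minute buckets."""
    def __init__(self, watched_ips=()):
        self.totals = Counter()              # {"up": bytes, "down": bytes}
        self.per_ip = defaultdict(Counter)   # per-IP up/down byte counts
        self.active_minutes = set()          # 1-minute buckets with any traffic
        self.watched = set(watched_ips)      # the few IPs I identify (e.g. the VPN)
        self.watched_minutes = set()

    def record(self, ip, direction, nbytes, timestamp):
        self.totals[direction] += nbytes
        self.per_ip[ip][direction] += nbytes
        minute = int(timestamp // 60)
        self.active_minutes.add(minute)
        if ip in self.watched:
            self.watched_minutes.add(minute)

    def watched_share(self):
        """Fraction of all bytes going to the watched IPs - e.g. for the tax deduction."""
        watched = sum(sum(c.values()) for ip, c in self.per_ip.items() if ip in self.watched)
        total = sum(self.totals.values())
        return watched / total if total else 0.0
```

The time-of-day/day-of-week histogram is then just a reduction over active_minutes.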

(Lower priority: VPN in the router: I have thought about tunnelling everything from my house router to Dreamhost for $20 a month. Mainly so that I also get a VPN that I can use while travelling.  VPN in the router is attractive so that I don't need to configure it on Sophie and Rhonda's machines, etc.)

Thursday, March 15, 2012

attributes on closing tag (pseudo-XML)

Here's an example of why I like having the ability to place attributes on closing tags in pseudo-XML.  Proper XML only allows attributes on opening tags.

I like using pseudo-XML to structure the output of a test suite. For example:

<test start "foo" >
<test start "bar">
<test "1" passed>
</test end "bar" passed=10 failed=0>
<test start "baz">
</test end "baz" passed=13 failed=2>
</test end "foo" passed=13 failed=2>

More nicely indented:
<test start "foo" >
    <test start "bar">
        <test "1" passed>
    </test end "bar" passed=10 failed=0>
    <test start "baz">
    </test end "baz" passed=13 failed=2>
</test end "foo" passed=13 failed=2>

Anyway, the basic idea is to use pseudo XML ...s to encapsulate test output.

When I translate from pseudo XML to real XML, I can use any of several nice viewers that allow sections to be opened and closed.  E.g. Internet Explorer.

Placing attributes on the closing tag allows me to do such processing in a single pass.  E.g. in a UNIX pipeline:

run-test-suite | collate-results

This is one of the reasons I like being able to place attributes on closing tags: it allows single pass processing.  Often, I am processing some other program's output.

I am fine on eventually converting to standard XML. If necessary.
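A minimal sketch in Python of the kind of collate-results filter this enables - because the pass/fail counts ride on the closing tag, a single pass over the stream suffices (the tag grammar is the one from the example above; a real filter would also track nesting and handle malformed input):

```python
import re

# matches e.g.: </test end "bar" passed=10 failed=0>
END_TAG = re.compile(r'</test end "([^"]+)" passed=(\d+) failed=(\d+)>')

def collate(lines):
    """One pass over test-suite output: {test name: (passed, failed)}."""
    results = {}
    for line in lines:
        m = END_TAG.match(line.strip())
        if m:
            results[m.group(1)] = (int(m.group(2)), int(m.group(3)))
    return results
```

If the counts were only on the opening tags (or absent), the filter would have to buffer each test's entire subtree before emitting anything - exactly the second pass I want to avoid in a pipeline.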


By the way, I do pseudo-XML, like

<test "name" passed assertions=56>

mainly because I find it more human readable than

<test name="name" passed="true" assertions="56">

Human readability counts, since there is not such a plethora of XML tools on the UNIX command line that I never look at the raw text.

ISO composite workspaces

Here's the thing that I really miss in the Mercurial based version control I am currently using; i.e. here is something that CVS used to do really well, that Mercurial makes a pain.

Composite workspaces.

Here's the output of a shell script that goes and looks at 4 of the hg repos/clones/workspaces that I am normally working in:

        hg root: /home/glew
        hg root: /home/glew/work/work.mips
M tools/ag-mips-automation/ag-mips-automation.pl
        hg root: /home/glew/work/work.mips/uarch-perf-sim/psim+stuff
M psim-extra-stuff/diag_instruction_list_including_undefined_instructions.s
M psim-extra-stuff/parse-rtl.trace-or-ref.trace
        hg root: /home/imp_scratch_perf/glew

It's not too bad - I've only made coordinated changes in two repos.  Sometimes I have made changes in 4 or 5, that are all coordinated in some sense.

Back in my CVS days I could build a composite workspace, and check them all in at the same time, with a single command and checkin message. I could even tag them together.

In Mercurial, I have to go to each repo and check in separately.

Don't say "subrepos": I am already using Mercurial subrepos: psim+stuff is actually a super-repo with 2 subrepos.  Subrepos get part of the way there, but are nowhere near as easy and convenient to use as the composite workspaces I could set up with CVS.


I am reasonably certain that the above excerpt is non-sensitive: it's pretty well known that I work for MIPS, on performance simulators, and with RTL.  I sanitized any product names.


One big reason to want composite checkins: I work better when I am keeping track in a log.  E.g. ~/LOG -> ~/mips/LOG.

(Let's skip over issues about why UNIX text files are lousy for logs.)

So nearly always I want to check in both my LOG, and my project work, at the same time.  But they are in different repos.


I don't make my project work a subrepo of my home directory, ~, or of my company work area under my home directory, ~/work or ~/work/work.mips, because I frequently create new ad-hoc clones, i.e. task branches.  It is a pain to have to edit Mercurial's subrepo file whenever I want an ad-hoc branch.

The nice thing about CVS composite workspaces was that, in some existing workspace, I could just say "cvs -d SOME-OTHER-CVSROOT co repo" and it just worked.  I.e. connecting an ad hoc subrepo was trivial.

In an ideal world, I might want to make the distinction

(1) clone into a new subdirectory of an existing workspace, implicitly making it an ad hoc subrepo of the workspace it is checked out into

(2) clone, without linking into the enclosing workspace.


Almost what I want:


# Script to run Mercurial commands on the usual places I run them

for i in  ~/. ~/mips/. ~/mips/uarch-perf-sim/psim+stuff/. ~/mips/rtl/. ~/mips/rtl/current/.
do
    echo $i
    hg --cwd $i root 2>&1 | sed -e 's/^/        hg root: /'
    hg --cwd $i "$@" 2>&1 | sed -e 's/^/        /'
done

although "hgu ci" does not automatically copy the checkin message between repos.
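The missing piece - one checkin message propagated to every repo - could be sketched like this (hypothetical: the repo list mirrors the script above, and the helper names are mine):

```python
import subprocess

# The usual places, as in the shell script above.
REPOS = ["~/.", "~/mips/.", "~/mips/uarch-perf-sim/psim+stuff/."]

def commit_commands(repos, message):
    """Build one 'hg ci' command per repo, all sharing the same checkin message."""
    return [["hg", "--cwd", repo, "ci", "-m", message] for repo in repos]

def composite_checkin(message, repos=REPOS):
    for cmd in commit_commands(repos, message):
        # a repo with nothing to commit exits nonzero; that is not fatal here
        subprocess.run(cmd, check=False)
```

This gets the shared message, but not CVS's atomicity: each repo still commits independently, so a true composite checkin (single revision spanning repos, joint tagging) remains out of reach.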

Wednesday, March 14, 2012

Shells tightly bound to directory.

I'm usually the guy who wants more. More features, more flexibility.

My rule is "first implement all reasonable features in an orthogonal manner. Then choose the defaults."

E.g. "provide a command line switch for all options. But then provide a reasonable default, which may correspond to a set of switches, or a reasonable set of switches for various default combinations".


However, more and more I am thinking that my present usage model might benefit from LESS flexibility in the Unix shell.

I live in emacs.  Many shell windows, each typically named for the directory they are in.

It's a pain when I chdir away from that directory inside such a window.  Occasionally I make errors.

I think that I might like to REMOVE the ability to chdir in such a shell.

(Can I do this via aliases? Imperfect.)

Monday, March 12, 2012

X font path problems

Can you see what has been annoying me today?


I would Google +1 this - if I could figure out how/where my +1 button got to in my Chrome browser.

A perfunctory attempt to google fails to find out how to enable +1 in my browser.  I cannot even remember if +1 works in your browser for sites that don't have their own +1 buttons.

Out of Date Installations

I have wasted so much work time discovering and working around problems that occur on old installations (that I happen to be given to work on) but which are fixed on more recent updates to the system software.

Thursday, March 08, 2012

Different device inputs => different passwords, different writing styles

I am probably not the only one to notice that the reasonably secure passwords I type fairly easily on my PC, or even with handwriting recognition on my Windows tablet, can be really painful to type on an iPhone or tablet with onscreen keyboard.

? Will there be a rash of security breakins because smartphone and tablet passwords are too easily broken?  Or at least the subset of easy typing common to both PCs and portable devices?

Today added a new device: a Kindle with keyboard.  Not the latest and greatest, but okay. I love the battery life.  I can access my wiki using the experimental web browser. But the keyboard... sheesh, I force myself to write in a different style. "Zero" rather than "0".

Still, nice when it's the only device I have.

(And, yes, I have 2 Android tablets - and I prefer the Kindle. e-Paper. Battery life.)

Good programmers generate good error messages

On the list of my favorite things - NOT:

Programs that respond to user input errors with crashes and stack dumps.  Whether the stackdump is machine code, Perl, Python or whatever...

I could say something like "Good programmers write code that runs with legal input.  Better programmers write code that gives good error messages and/or otherwise handles illegal input better."

Except that it is sad that programmers have to write so much error handling code.  Error handling code can grow to take up more space than the real code.

I like language systems that make it easier to write error handling code.  Not necessarily user friendly error messages, but error messages that give the user more of a chance to figure out what went wrong.

This is one thing I like about C++ exceptions: throw a string-like error message.  When caught, add more info to the string, and throw again, until you get to the outermost level where you die - or some intermediate level where you can die or otherwise handle it.

But I can hear the dweebies say "You aren't supposed to throw string or char* exceptions".  True - for an out-of-memory exception.  But except for that, concatenating or stacking string exception error messages is one of the simplest and best ways to report errors.  So make the exception system handle it.

Similarly, this sounds just like a stack dump.  But it is a user intelligible stack dump - some thought can go in to the strings that get thrown.
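The catch-augment-rethrow pattern can be sketched briefly (here in Python rather than C++, purely for compactness; the file path and function names are made up for illustration):

```python
def read_config(path):
    try:
        return open(path).read()
    except OSError as e:
        # add context and rethrow, instead of letting a raw stack dump escape
        raise RuntimeError(f"reading config file {path!r}: {e}") from e

def start_service(config_path):
    try:
        return read_config(config_path)
    except RuntimeError as e:
        # each level stacks on a bit more user-intelligible context
        raise RuntimeError(f"starting service: {e}") from e
```

What reaches the outermost level is a sentence a user can act on ("starting service: reading config file '...': No such file or directory") rather than a bare traceback - the "user intelligible stack dump" described above.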

Tuesday, March 06, 2012

malloc and transactional memory (TM)


Dynamic memory allocation such as malloc is a classic example of why one may want memory accesses that can leave a transaction.

Consider a simple mallocator that allocates from a single queue or arena.
This might mean that multiple transactions, otherwise completely independent,
might appear to interfere with each other through malloc.

(This interference might take several forms.
E.g. if the transactions are first performed totally independently, they may allocate the same memory.
E.g. if the transactions malloc separate areas but are sequentially dependent, rollback of one might require rollback of the other.
(Although even this is an advanced TM topic, if the footprints overlap.))

Certainly, there can be malloc algorithms that minimize this - e.g. multiple areas, increasing the chance that different transactions might not interfere.

Or... the topic of this page:  permit malloc to exit the transaction
- so the [[parallel or concurrent]] mallocs may be properly synchronized, and receive independent memory addresses.

Q: what happens when a transaction aborts?
* Possibly could free the memory in the abort handler.  But this becomes less hardware, and more software TM.  Or at least a hybrid.
* Let garbage collection recover the memory speculatively allocated within an aborted transaction.
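The first option - freeing in the abort handler - can be sketched as a toy undo log (a software-TM-flavored sketch of my own, not any particular system's API):

```python
class TxMalloc:
    """Toy mallocator whose transactional allocations can be undone on abort."""
    def __init__(self):
        self.allocated = set()
        self.tx_log = []   # addresses handed out inside the open transaction

    def malloc(self, size):
        addr = max(self.allocated, default=0) + 1   # stand-in for a real address
        self.allocated.add(addr)
        self.tx_log.append(addr)
        return addr

    def commit(self):
        self.tx_log.clear()

    def abort(self):
        # abort handler: free everything speculatively allocated in the transaction
        for addr in self.tx_log:
            self.allocated.discard(addr)
        self.tx_log.clear()
```

Note that this bookkeeping lives in software, which is the point made above: once the abort handler must run allocator code, the design has drifted from pure hardware TM toward software TM or a hybrid.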

(Q: what about [[deterministic finalization and aborted transactions]]?)

= Related Topics =

* [[speculative multithreading and memory allocation]]:
** [[malloc and speculative multithreading]]
** [[stack allocation and speculative multithreading]]

Pseudo-atomic - atomic operations that can fail


[[Pseudo-atomic]] is a term that I (Glew) have coined to refer to atomic operations that can fail to be atomic, such as:
* [[Load-linked/store-conditional (LL/SC)]]
* even [[hardware transactional memory]]
** such as [[Intel Transactional Synchronization Extensions (TSX)]]'s [[Restricted Transactional Memory (RTM)]], where "the [[XBEGIN]] instruction takes an operand that provides a relative offset to the fallback instruction address if the RTM region could not be successfully executed transactionally."

Q: what does IBM transactional memory provide? Is it pseudo-atomic, or does it provide guarantees?

I.e. these [[pseudo-atomic]] operations do not guarantee completion.
At least not at the instruction level.
While it is possible to imagine implementations that detect use of pseudo-atomic instruction sequences
and provide guarantees that certain code sequences will eventually complete,
such mechanisms are
(1) not necessarily architectural
and (2) more complicated than non-pseudo-atomic instructions.

E.g. for [[LL/SC]] hardware could "pick up" the instructions that lie between the load-linked and the store-conditional,
and map them onto a vocabulary of atomic instructions such as [[fetch-and-op]] that is supported by the memory subsystem.
Similarly, [[LL/SC]] might be implemented using [[transactional memory (TM)]].

Intel's TSX documentation says

    RTM instructions do not have any data memory location associated with them. While
    the hardware provides no guarantees as to whether an RTM region will ever successfully
    commit transactionally, most transactions that follow the recommended guidelines
    (See Section 8.3.8) are expected to successfully commit transactionally.
    However, programmers must always provide an alternative code sequence in the fallback
    path to guarantee forward progress. This may be as simple as acquiring a lock
    and executing the specified code region non-transactionally. Further, a transaction
    that always aborts on a given implementation may complete transactionally on a
    future implementation. Therefore, programmers must ensure the code paths for the
    transactional region and the alternative code sequence are functionally tested.

GLEW OPINION: requiring alternate code paths has historically been a bad idea.
E.g. the [[Intel Itanium ALAT]]. Now [[RTM (Restricted Transactional Memory)]].

Why, then, provide pseudo-atomicity?

* Pseudo-atomic operations allow complicated atomic operations to be built up out of simpler ones.
* Plus, of course, it is easier than providing real atomicity.  Most of the time it works.  Most of the time may be good enough for many people, who may not care if it occasionally crashes when the seldom used alternate path is exercised.

New(ish) IBM z196 synchronization instructions


= New IBM z196 synchronization instructions =

The IBM z196 adds new [[synchronization instructions]] to the [[IBM System z Mainframe ISA]].

Augmenting the legacy instructions

The reference [share] comments "there is no need for a COMPARE AND SWAP loop to perform these operations!"
- their exclamation mark!
This suggests the motivation for these instructions
- while [[compare and swap]] is one of the most powerful synchronization instructions,
it is not necessarily efficient.
Atomic operations such as [[atomic add to memory]] can perform in one instruction,
without a loop, things that would require looping for [[compare and swap]], [[test-and-set]], and [[load-linked store-conditional]].
Looping that may require special mechanisms to guarantee forward progress.
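The contrast can be sketched in a single-threaded Python pseudo-implementation (a real CAS is a hardware primitive; atomicity here is by assumption only):

```python
def compare_and_swap(cell, expected, new):
    """Pseudo-CAS on a one-element list: atomic only by assumption in this sketch."""
    if cell[0] == expected:
        cell[0] = new
        return True
    return False

def add_via_cas_loop(cell, delta):
    # what COMPARE AND SWAP users must write: retry until no one intervened
    while True:
        old = cell[0]
        if compare_and_swap(cell, old, old + delta):
            return old

def load_and_add(cell, delta):
    # what a z196-style fetch-and-op instruction does in one step: no loop,
    # and hence no forward-progress question about the retry
    old = cell[0]
    cell[0] = old + delta
    return old
```

Under contention the CAS loop can in principle retry indefinitely; the fetch-and-op form completes in one memory operation, which is exactly the "no need for a COMPARE AND SWAP loop" being advertised.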

The z196 [[interlocked-access facility]] instructions include

New atomic instructions:

    The "LOAD AND ..." part of these instructions' names is a bit misleading.
    These are really just [[fetch-and-op]] instructions to memory.
    Fetching the old value, performing the operation with a register input,
    storing the new value so produced,
    and returning the old value in a register.

    Flavours: add signed/unsigned, logical and/or/xor, 32/64 bits wide.

An interesting instruction that I feel must be called [[pseudo-atomic]], in much the same way [[LL/SC]] is [[pseudo-atomic]]:

    [[LOAD PAIR DISJOINT]] loads two separate, non-contiguous, memory locations (each of which must be naturally aligned),
    into an [[even/odd register pair]].
    It sets condition codes to indicate whether the fetch was atomic,
    or whether some other CPU or channel managed to sneak in a store between them.
    (The language in the [share] presentation suggests that the condition codes are only set if a store actually occurred,
    not "may have occurred" - since if the latter, a correct implementation might always return "failed to be atomic".)

    GLEW COMMENT: although undoubtedly most of the time such actions are atomic, it is not clear to me that there is any guarantee of forward progress,
    for a loop around [[LOAD PAIR DISJOINT]] that spins until atomicity is observed.

    I almost suspect that there may be plans to eventually provide some such guarantees,
    but that the detail of such guarantees does not want to be cast in concrete architecture at the time of introduction.

In addition, the existing instructions that perform [[add immediate to memory]], in signed/unsigned and 32b/64b forms, are declared to be atomic
when "the storage operand is aligned on an [[integral boundary]]" - IBM speak for [[natural alignment]].

    GLEW COMMENT: it is obviously desirable to be able to have a fetch-and-add immediate to memory, to avoid having to blow a register on the operand for the atomic modification.

    It is a bit unusual to be able to extend an existing instruction in this way.  If these existing instructions were in wide use, one would expect existing code to be somewhat slowed down.
    However (1) I suspect the trend towards simpler OOO instructions has already made these "add to memory" instructions slower,
    while on the other hand (2) advanced implementations make such atomic instructions negligibly slower than their non-atomic versions.

= [[Atomicity is always relative]] =

IBM literature says: "as observed by other CPUs and the channel subsystem, ... appear to occur atomically".

IBM also uses the phrase "block-concurrent interlocked update".  "Block concurrent" is a special IBM term related to memory ordering, that says that all bytes are accessed atomically as observed by other CPUs. However, they may be observed a byte at a time by channel programs... but "interlocked" means that channel program requests cannot be inserted between the bytes.
Bottom line: atomic wrt CPUs and I/O channels.

= Reference =

Many references, scattered.

: New CPU Facilities in the IBM zEnterprise 196, share.confex.com, 4 August 2010, http://share.confex.com/share/115/webprogram/Handout/Session7034/New%20CPU%20Facilities%20in%20the%20z196.pdf

Sounds like a Fraudster Phone Call

I just received a phone call whose caller ID says "Credit Service", 11:59am, 1-701-661-1003.

Recorded message saying something like "This is your credit card company." Note: they did not say what company, just a generic phrase like "your credit card company".

Going on "There is no reason to be alarmed. You are eligible for a special reduction in interest rate to 6.9%. You must act quickly, because this special offer expires soon. Press 1 if you want to receive this special lower interest rate."

When I pressed 1, after some hold music, eventually I got what sounded like a human. He said something that again sounded, to my recollection, like "This is the charge department".

At this point I said "Hold on, you guys cold called me, so I need you to tell me what company you are, and how I can verify..." And they hung up.


Now, I must be careful, since I am reporting the exact caller ID as reported by my phone, including the phone number, but since my notes above as to what they and I said are only an approximate recollection.

I don't record all of my telephone calls. At least, not at this time.

If this was a legitimate business call, at the very least they exhibited bad customer service.

However, this is also exactly the sort of thing that a fraud operation might do: try to fool people into giving out account numbers over the phone, etc.


I wonder if there are any police systems to report this sort of thing to.

Oregon: http://www.doj.state.or.us/finfraud/engexplanation.shtml

Wish there was a national service.  FBI has a webpage, but looks like it is for actual fraud only, not suspicion.

Monday, March 05, 2012

Warnings, variables, and unnested brackets

I have long been fascinated by improperly nested bracketed constructs, such as [ ( ] ).

Today I ran into an example of a situation that might warrant such improper nesting. I decided to clean up a Perl script, converting it to 'use strict'.  Along the way I enabled warnings using Perl's lexically scoped 'use warnings' facility. In places I had to disable warnings, to make the code compile and run with few enough changes.

And so I encountered:

if( ... ) {
   my $m = an expression with an ignorable warning;
I want to disable the warning for the smallest possible region, but
if( ... ) {
   { no warnings 'type';
   my $m = an expression with an ignorable warning;
restricts the scope of both the warning disable (good) and the variable (bad). Whereas letting the warning stay disabled until the end of the enclosing lexical scope
if( ... ) {
   no warnings 'type';
   my $m = an expression with an ignorable warning;
disables the warning for too large a region. What you want is
if( ... ) {
   <disable-warnings 'type'>
   <variable-scope 'm'>
   my $m = an expression with an ignorable warning;
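Standard Perl has no construct with exactly that non-nested pair of scopes, but the usual workaround gets close: pre-declare the variable in the outer scope, and do the assignment inside a bare block that carries the 'no warnings' pragma. A minimal runnable sketch (the variable and subroutine names are my own invention):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Deliberately undef, to provoke an 'uninitialized' warning below.
my $undef_thing;

sub compute_m {
    my $m;          # variable scope: the whole subroutine
    {
        no warnings 'uninitialized';    # warning disabled: this inner block only
        $m = "prefix-" . $undef_thing;  # would warn without the pragma
    }
    # 'uninitialized' warnings are back in force here, but $m is still visible.
    return $m;
}

print compute_m(), "\n";    # prints "prefix-"
```

The variable's scope is the subroutine, the warning disable's scope is only the inner block - the two scopes overlap rather than nest, which is as close as lexical Perl gets to the hypothetical `<disable-warnings>`/`<variable-scope>` construct.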

Sunday, March 04, 2012

IBM z196 high word facility


== Discussion ==

The [[high-word facility]] is somewhat intermediate between [[overlapping registers]] and [[extending registers]].

In terms of dataflow, a large, 64 bit register assuredly contains at least two separate pieces whose dataflow must be tracked.
This means that a natural OOO implementation of the 16 64-bit registers would have to track 32 different 32 bit words.
Writing to a 64 bit register would update the renaming pointers for both halves, etc.
(Note that I say "natural". Alternate implementations are possible, e.g. at the cost of [[partial register stalls]].)
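As a toy illustration of that "natural" implementation (entirely my own sketch with invented names, not any real z196 microarchitecture): a rename table mapping each architectural GPR to two physical 32-bit registers, where a full 64-bit write renames both halves but a high-word write renames only one:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Toy rename table: each of 16 architectural GPRs maps to two physical
# 32-bit registers, one per half (physical register names are invented).
my @rename = map { { hi => "p" . (2 * $_), lo => "p" . (2 * $_ + 1) } } 0 .. 15;
my $next_phys = 32;    # next free physical register number

sub write64 {          # a 64-bit write renames BOTH halves
    my ($r) = @_;
    $rename[$r]{hi} = "p" . $next_phys++;
    $rename[$r]{lo} = "p" . $next_phys++;
}

sub write_hi {         # a high-word write renames only the high half
    my ($r) = @_;
    $rename[$r]{hi} = "p" . $next_phys++;
}

write64(3);            # r3 now maps to p32 (hi) / p33 (lo)
write_hi(3);           # only the high half is renamed, to p34
print "$rename[3]{hi} $rename[3]{lo}\n";    # prints "p34 p33"
```

The point of the sketch is simply that the renamer ends up tracking 32 mappings for 16 architectural registers, and a 64-bit write touches two of them at once.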

However, the [[high-word facility]] uses the high word
without increasing the size of the register number field in the instruction encoding.

IBM's own literature implies that it was led to do this
because 16 GPRs was not enough.

The question is whether one should consider something like the high-word facility
- generalized, perhaps, to not just the high word, but to allow access to different parts of overlapped registers,
e.g. 32 bit scalars extended to 128 bit SIMD packed vector registers.

One might argue that, while 16 registers is not enough, 32 or 64 is more than enough.  So why bother?

Note that there are two levels. Level 0 is simply to provide accessors - e.g. access the X, Y, Z, or W channels of a 128 bit wide quantity.
Level 1 provides operations, not just accessors.

== High word facility ==

The IBM z196 processor added the [[high-word facility]] or [[high word extension]]
to the [[IBM System z Mainframe ISA]], circa 2010.

Since its introduction in 1964, System/360 and all of its successors have provided 16 general purpose registers.
Many programs were constrained by this rather small number of registers.

When the registers were extended to 64 bits, the upper 32 bits of the 64 bit registers became underused.
(In IBM parlance, bits 0-31 of the GPRs are the upper half, the most significant, and bits 32-63 are the lower half, the least significant.)

Many programs only needed 32 bit instructions and addresses;
indeed, many programs were limited to 32 bit addresses.
And even programs that use 64 bit instructions and addresses do not use them everywhere.
E.g. in C parlance, there may be 32 bit ints in a 64 bit register.
(Also, for that matter, 16 bit shorts and 8 bit chars or bytes.)

But even these 32 bit programs can benefit from an extra 16 32-bit registers' worth of storage,
in the upper halves of the 64 bit registers.

So, this is what the [[high-word facility]] does: it provides a limited set of instructions that use the upper 32 bit halves of the 64 bit registers.

== List of Instructions ==

* ADDs:
** ADD HIGH, [[AHHHR]]: r_hi = r_hi + r_hi
** ADD HIGH, [[AHHLR]]: r_hi = r_hi + r_lo
*** Presenter comments "should perhaps be called ADD HIGH AND LOW".
** ADD HIGH IMMEDIATE, [[AIH]]: r_hi += imm32
** ADD LOGICAL HIGH, [[ALHHHR]]: r_hi = r_hi + r_hi
** ADD LOGICAL HIGH, [[ALHHLR]]: r_hi = r_hi + r_lo
** ADD LOGICAL WITH SIGNED IMMEDIATE HIGH, [[ALSIHN]]: r_hi += imm32
***  [[ALSIHN]] is like [[ALSIH]], but is an example of [[instructions that do not change condition codes]].

* BRANCH RELATIVE ON COUNT HIGH, [[BRCTH]]: if( --r_hi) goto target

* COMPAREs:
** COMPARE HIGH, [[CHHR]]: r_hi ? r_hi
** COMPARE HIGH, [[CHLR]]: r_hi ? r_lo
** COMPARE HIGH, [[CHF]]: r_hi ? S20
*** F/S20 is a storage operand, i.e. in memory specified by an addressing mode.
** COMPARE IMMEDIATE HIGH, [[CIH]]: r_hi ? imm32

* COMPARE LOGICAL HIGH: in HH, HL, HF and IH flavors

* LOADs:
** LOAD BYTE HIGH (signed/unsigned (LOGICAL))
** LOAD HALFWORD HIGH (signed/unsigned (LOGICAL))

** flavours to add 32 to bit indices

* STORE HIGH - 8, 16, 32

* SUBTRACT HIGH: signed/unsigned, HHH and HHL flavors

The IBM slideset that I got this from says

    "Also note that, of necessity, certain characters in the mnemonics have become a bit overloaded. The
    rookie programmer will likely find using the high-word facility challenging. We hope the benefits will
    be worth it."

GLEW COMMENT: it is sad when mnemonics become an obstacle to use.
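The r_hi/r_lo semantics listed above can be modeled in a few lines. This is my own sketch of two of the adds, treating a 64-bit GPR as an unsigned integer with IBM's bits 0-31 as the high word; it assumes a 64-bit Perl build, and the helper names are invented:

```perl
#!/usr/bin/perl
use strict;
use warnings;

use constant LO32 => 0xFFFF_FFFF;

sub hi { (shift() >> 32) & LO32 }    # extract high word (IBM bits 0-31)
sub lo { shift() & LO32 }            # extract low word  (IBM bits 32-63)

# Replace the high word of $r with $h, leaving the low word untouched.
sub set_hi { my ($r, $h) = @_; ($r & LO32) | (($h & LO32) << 32) }

# AHHHR r1,r2,r3: r1_hi = r2_hi + r3_hi  (r1's low word is preserved)
sub ahhhr { my ($r1, $r2, $r3) = @_; set_hi($r1, hi($r2) + hi($r3)) }

# AHHLR r1,r2,r3: r1_hi = r2_hi + r3_lo
sub ahhlr { my ($r1, $r2, $r3) = @_; set_hi($r1, hi($r2) + lo($r3)) }

my $r2 = (2 << 32) | 7;    # hi=2, lo=7
my $r3 = (3 << 32) | 5;    # hi=3, lo=5
my $r1 = 0xDEAD_BEEF;      # its low word should survive a high-word add

printf "AHHHR: hi=%d lo=0x%X\n", hi(ahhhr($r1, $r2, $r3)), lo(ahhhr($r1, $r2, $r3));
# prints "AHHHR: hi=5 lo=0xDEADBEEF"
```

Note how the destination's low word passes through unchanged - exactly the property that forces an OOO renamer to treat the two halves as separate dataflow, as discussed above.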

== References ==

    Many references, scattered.
    * New CPU Facilities in the IBM zEnterprise 196, share.confex.com, 4 August 2010, http://share.confex.com/share/115/webprogram/Handout/Session7034/New%20CPU%20Facilities%20in%20the%20z196.pdf
    * IBM z/Architecture Principles of Operation (SA22-7832-08), August 2010, http://www.ibm.com/servers/resourcelink/lib03010.nsf/0/B9DE5F05A9D57819852571C500428F9A/$File/SA22-7832-08.pdf (requires registration)

Regional Pricing

Interesting observation about regional pricing differentials:

I am purchasing bookshelves.  Googling Ikea, I found yellow and green Billy bookshelves with glass doors for $79.99.

However, in Portland they are being sold for $99, with advertising that pushes them as University of Oregon "Ducks" colors.

Same part number. Not available for website purchase.

Non-painted equivalent bookcases - birch, black, brown - are $249.99.

Called Ikea. Confirmed that they are for sale everywhere except Oregon at $79.99 - e.g. in Seattle - but at $99 in Portland. The product is being closed out by April 1st. The fellow I talked to on the phone says that he has never seen a local price higher than the website price.  But we can guess what is happening here: an Ikea store manager seeking to take advantage of Oregon Ducks fans possibly being more willing to pay a premium.  Or, conversely: somebody tried to make an Oregon Ducks special, and is now closing it out, with an especially large discount outside of Oregon.

Myself, I did not go to the UofO, so I would not pay a premium.  I'm just looking for a good deal on glass-fronted bookshelves.

Marketing classes teach about price discrimination.  It's interesting to see it in practice, as a consumer.

... Woah, prices for this family of bookcases are falling all across Ikea's webpages, since I looked at them last night.

Friday, March 02, 2012

"Panic Room" extension => better persistent tab manager

I used to use several tab extensions for Firefox: multiple lines of tabs, tree structured tabs, etc.  Like Steve Gibson, of the Security Now podcast, I used these as my "get back to" or "to read later" list.  I gave up many of these when I started using Chrome.  I liked these extensions, but I always found their usage limited by lack of persistence.

Now, most browsers nowadays have the ability to restore the last set of tabs you had open. This helps... but then I'm back to needing a better tab manager.

Plus, I often use several browser windows, because I usually have 3 or 4 displays.  I typically have 2-6 browser windows, each of which may have many tabs - often 6-12, sometimes many more.  (I'm limited here by Chrome's lousy tab management.)

Googling around, I found a reference on http://www.google.com/support/forum/p/Chrome/thread?tid=4a2a2c76b096e33f&hl=en to the "Panic Button" extension.  It is for Chrome, but I have also seen similar extensions for Firefox.  I always steered away because they seemed to be oriented towards people doing naughty stuff that they want to hide when the boss drops by.

Maybe so, but this Panic Button extension also provides useful tab management, in combination with its "Panic Room" sidekick.  It saves the current set of tabs into a folder (uniquified by time and date) in my bookmarks. Although I am somewhat embarrassed by the names "Panic Button" and "Panic Room", the folders can be renamed and moved around. I can save the current set of tabs in each of my windows separately, and easily switch between several contexts and modes. They are persistent and reusable, because I am using Google sync.

E.g. I currently have saved tab sets for manuals and wiki at work, new IBM mainframes, and personal stuff like my wiki and blog.


What I still want:
  • Save/restore tab configurations for several windows with several tabs apiece.
  • A name other than "Panic Room".

Also, I get worried about security when using any extension.