Disclaimer

The content of this blog is my personal opinion only. Although I am an employee - currently of Nvidia, in the past of other companies such as Imagination Technologies, MIPS, Intellectual Ventures, Intel, AMD, Motorola, and Gould - I reveal this only so that the reader may account for any possible bias I may have towards my employer's products. The statements I make here in no way represent my employer's position, nor am I authorized to speak on behalf of my employer. In fact, this posting may not even represent my personal opinion, since occasionally I play devil's advocate.

See http://docs.google.com/View?id=dcxddbtr_23cg5thdfj for photo credits.

Sunday, June 26, 2016

Apple iCloud Family Sharing: "family" = "shared credit card"

Apple iCloud Family Sharing sounds good. Especially with the Ask to Buy feature, which by default requires purchases by children to be confirmed by a parent.

But what about purchases by family members after they turn 18?
"turn Ask to Buy on or off for family members who are under 18* in your family group. If you turn off Ask to Buy for a family member after they turn 18,* you can't turn it on again."
As the parent of a soon-to-be-18-year-old going off to college, this matters to me. ;-}

While I might *like* to start off by letting my progeny-at-college make purchases, I definitely want the ability to revoke such permission if they prove irresponsible.  Not that *my* child would ever max out her parents' credit card, but one hears stories...  :-)

But this "you can't turn it off again after age 18" feature means that, if you are going to withdraw permission, then you might suddenly have to purchase new licenses/copies of things that have hitherto been shared.

Worse - my normal way of handling issues like this is to use a virtual credit card number with a low credit limit. But it appears that iCloud Family Sharing uses the same credit card for all Apple App Store purchases for all members of the family. So if I use my standard trick to keep my learning-to-manage-finances kid from overspending, my wife and I also end up limited in our App Store purchases.

More: my wife and I typically use separate credit cards, to track who is spending what.   This also appears not to be viable with Apple iCloud Family Sharing.  (And, for that matter, with most other "family sharing plans", like Amazon's Kindle.)


** The Limit of Family = Shared Credit Card

In an ideal world, each participant in such a sharing plan would have their own credit.  There might, in addition, be shared credit, or "credit that you have to get approval to use".  And they could share items independent of whose credit was used to purchase them.

Notice that I said "such a sharing plan" in the previous paragraph, not "such a FAMILY sharing plan".

I am guessing that the problem with this, from the point of view of companies like Apple and Amazon, is that it would be hard to restrict such a plan to legal families.  It would be easy for groups of friends to set up such a sharing plan. Possibly to lie about being related.  And increased sharing would reduce sales.

So, companies like Apple and Amazon, rather than letting themselves get bogged down in questions of what is or is not a family, have simply adopted the rule of thumb that sharing a credit card is their definition of family.   If you trust a dozen of your friends to share your credit card, you're a family.

Or, rather: companies like Apple and Amazon don't really care about limiting sharing to legal families. What they care about is limiting sharing, period - they have only grudgingly allowed "family" sharing plans, because of high demand from the classic "2 adults with a few children" households.  But they don't want to allow arbitrary sharing.

They don't care whether the "parents" are husband-and-wife or, in this era of gay marriage, husband-and-husband or wife-and-wife.

For that matter, the companies probably don't care too much if there are more than two adults in the family: e.g. parents, grandparents, adult children in college or returned to the nest.  The hassles of sharing a credit card make it unlikely that too many contentious adults will be willing to participate in the "Family Sharing Plan".

Still more: polyamory.  I live in Portland, Oregon: I know people who are in polyamorous families: e.g. 4 adult men, 5 adult women, 3 children, spread across two or more houses or apartments.  Such a polyamorous family could participate in the Apple Family Sharing Plan - as long as they can stomach the shared credit card.   (Although I imagine that Apple probably has an upper limit on number of participants, it would be hard to distinguish polyamory from having the grandparents staying with you. ... I see that Apple limits "family sharing" to 6 people.  Already they are discriminating against nuclear families with more than 4 children! Or 2 grandparents, 2 parents, 2 children. Or ...)


** Nevertheless

But going back to the issue of a family sharing plan that includes grandparents living with parents and children, an increasingly common situation:

What happens if the seniors go senile?  The "cannot reimpose Ask to Buy" rule could make managing that awkward.

It's the same issue as the just-turned-18 kid, although the direction of change is reversed.


** Why "cannot reimpose ask-to-buy after adulthood"?

I am sure the companies would say that one justification for the "cannot reimpose Ask to Buy" rule is to prevent spouses from using Ask to Buy as a tool, a weapon, to punish the other spouse(s) during marital disputes.

Semi-true --- nobody, individual or corporate, wants to get dragged into these disputes.
Although I am sure it will happen anyway: I can easily imagine a divorce court compelling continued use of a Family Sharing account. 
But it goes further: spouses that separate, then remarry a few years later: can they re-establish a family sharing plan? Or, as I have already mentioned, the parents becoming grandparents scenario.
 In the long run, the "cannot reimpose Ask to Buy" rule is definitely going to discriminate against some legitimate families.

Realistically, of course, the main reason for the "cannot reimpose Ask to Buy" rule is to discourage sharing long after the kids have grown up and moved out - if sharing a credit card is not reason enough.


** Alternatives to Shared Credit Card and the Cannot Reimpose Ask-to-Buy Rule

Given that the de-facto "family = shared credit card" and "cannot reimpose Ask-to-Buy" rules create both inconveniences and injustices, could we come up with something better?

I cannot imagine how - unless there were some sort of legal definition of a family, with a registry accessible to companies like Apple and Amazon.  Like sharing the parts of your tax return that define "Married filing jointly" with dependents.

And that is not necessarily a good thing for privacy.










Saturday, June 25, 2016

Jamie Dimon disses aggregators (but misses the easy fix)

Jamie Dimon in J.P. Morgan Chase's 2015 CEO Letter to Shareholders:
In the future, instead of giving a third party unlimited access to information in any bank account, we hope to build systems that allow us to “push” information – and only that information agreed to by the customer – to that third party. 
• Pushing specific information has another benefit: Customers do not need to provide their bank passcode. When customers give out their bank passcode, they may not realize that if a rogue employee at an aggregator uses this passcode to steal money from the customer’s account, the customer, not the bank, is responsible for any loss.
In general I am in favor of minimizing access: allowing the customer to specify the minimal information that should be made available from a bank to any financial software or aggregator.  Dare I say "capabilities"?

But I can't help but wonder if Dimon is confounding push/pull and capabilities. And missing the low hanging fruit.

The low hanging rotten fruit in bank and investment company security is having a single passcode that provides read/write access to the entire account.  Give your passcode to an aggregator such as Mint or Yodlee, and you are completely exposed - an aggregator employee gone bad can transfer money out of your account.

(Wesabe [RIP] made a point of never storing credentials on their servers - but then could not provide offline access.  Even Open Source software such as GNU Cash could be exposed - if inserted malware sends credentials off your system.)

SO WHY NOT CREATE A SEPARATE READ-ONLY PASSCODE ?!?!?

(Better yet, a separate read-only privilege authorized by a public/private key handshake.   But that may be dreaming in technicolor (a 1916 technology).)

A separate read-only passcode would enable financial account aggregation - collecting all of your balances from all of your accounts in a single unified view - without allowing the aggregator, or a bad employee thereof, to empty your account.
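
To make the distinction concrete, here is a minimal sketch - entirely hypothetical, with my own names, not any bank's or aggregator's actual API - of what scoped credentials might look like on the bank's side, in Perl:

use strict;
use warnings;

# Hypothetical credential table: each passcode carries a set of privileges.
my %credentials = (
    'full-passcode-123'     => { privileges => { read => 1, transfer => 1 } },
    'readonly-passcode-456' => { privileges => { read => 1 } },
);

# Hypothetical authorization check: a transfer attempted with the read-only
# passcode is refused, so a rogue aggregator employee cannot move money,
# even though they can still read balances and transactions.
sub authorize {
    my ($passcode, $action) = @_;
    my $cred = $credentials{$passcode} or return 0;
    return $cred->{privileges}{$action} ? 1 : 0;
}

print authorize('readonly-passcode-456', 'read')     ? "read ok\n"     : "read denied\n";
print authorize('readonly-passcode-456', 'transfer') ? "transfer ok\n" : "transfer denied\n";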

Sure, a bad guy with your read-only passcode would have a lot of personal information, possibly sufficient to bluff his way into getting more access.   But the key thing is, if that happened, it would be the fault of the bank, or the bank employee, that got bluffed - for not having followed procedures to prevent such bluffing.  Not the customer's fault.

Sure, a separate read-only passcode would enable read-only services, but would not enable services that require read-write access -- a hypothetical example being personal arbitrage: transferring cash between banks to whichever savings and checking accounts have the best interest rates and fees.  But a read-only passcode would enable an advisor that could recommend such arbitrage: the user would have to enter the read-write passcode to effect the transfer.

So there is a role for Dimon's "only that information that is agreed to by the customer".  Ditto Steve Ellis @ Wells Fargo's API to eliminate screen scraping. I.e. there is a role for capability based security. (You can't expect a capability guy like me to say otherwise.)

But one of the most basic capabilities is read-only access.  To everything.

(OK, not quite everything: Not information that can be used to steal your identity. Not your social security number. Possibly not your address, or full account numbers. But all account types, balances, transactions, interest rates, special deals... Everything except that which can be used directly to steal your identity.)

The most basic financial data account aggregation service that I want is read-only access.  To everything.  At all of my accounts. So that I can see all of my balances and all of my transactions in one place.  So that transactions can be assigned to budget categories.  So that the interest rates and benefits of different bank accounts, investment accounts, and credit cards can be compared automatically.

For example, I regularly travel outside the USA.  On my recent vacation I wasted several hours manually reading credit card companies' "foreign transaction fee" policies, ATM coverage, etc.  This is something that can and should be automated.

This sort of thing is why I get scared, and a little bit righteously angry, when I hear about Wells Fargo's "Data Aggregation APIs" and Dimon/JPMorgan's "pushing only the information agreed to by the customer".

Like I said: I want to be able to delegate read-only access. To all of my accounts. To all of the information. All current information, and all information that may be made available in the future.

Ideally only to a piece of Open Source financial account aggregation software that I trust (that I have read the code of, and review all patches for - yeah, right, in my dreams).  Realistically, to one-and-only-one aggregator that I trust.  Possibly Mint.  I don't trust Intuit/Mint as much as I trust LastPass (malware inside a password manager like LastPass would essentially have read/write access to almost my entire life[*]). But if I knew that an aggregator like Mint had read-only access but no ability to make changes, I would feel a bit better.

I get a bit worried by APIs because they are, almost by definition, limited. What do you want to bet that there is information available on the customer-exposed website that is not available through the API?   APIs, when they are available, should be more reliable.   But screen scrapers will always be with us - unless companies make a commitment to making APIs universal.   I am not holding my breath.

I get a bit worried by capabilities "pushing only the information agreed to by the customer", because the customer has not yet agreed to push the classes of information that did not exist at the time of the agreement.   So you either have an open-ended "any information of class X, where the provider decides what is and is not class X for new information in the future", or the aggregator is denied access to new classes of information until the customer gets around to authorizing access - which is another way of saying "the provider has a de-facto temporary monopoly on certain services."

I get worried by "pushing", because that probably means that the provider (the bank or investment company) can only push to a limited number of recipient services.  While it is good to only provide information to services the customer has authorized, and there is some value to having the provider vet or filter obviously trojan data aggregators --- what do you want to bet that your favorite Open Source software is not on their approved list?  Or that great new Wesabe/Mint/Betterment financial services cloud based startup that you want to use is not on their approved list?

Email is the only really ubiquitous push service, but we can't even get encrypted email security right yet.  I would love to be able to receive my account statements by secure email.  (And I don't mean having to log into a different private webmail interface for each provider.) Again, technicolor.

"Push" is sexy.  "Push" may be good for real-time fraud alerts.  "Push" is good for two-factor authentication.   But for my personal accounts, real-time push is overkill. I am not a day trader or market timer.   (Looks like I missed some profit opportunities with BREXIT because I am sloe to react - especially on vacation.)

--

I get a bit righteously angry about Dimon saying "pushing only the information agreed to by the customer" and "if a rogue employee at an aggregator uses this passcode to steal money from the customer’s account, the customer, not the bank, is responsible for any loss"  because ...  well, that's both true, and bullshit.

If the bank really cared about preventing a rogue employee at an aggregator from stealing money from a customer's account, the bank would provide A SEPARATE READ-ONLY PASSCODE.

All of this talk about APIs and push and capabilities is missing the low-hanging fruit wrt security.

And the banks are missing this easy opportunity to increase security wrt data aggregation, either (1) because the banks are stupid wrt security (which may well be true, but, for gosh sake, they are banks), or (2) because the banks are trying to come up with ways of monetizing these APIs and push services.


The adage "never ascribe to malice what can be ascribed to stupidity" usually makes me feel better - but not in this case.

---

So I want to make a clear public statement, and I want to encourage as many other people as I can to echo, re-tweet, etc. - whether computer security experts, lawyers, hackers or lay-people:

If a bank gives a customer a single passcode that provides full access 
And the customer gives that passcode to a reasonably reputable aggregator, like Mint or Yodlee 
And a bad employee at such an aggregator uses the passcode to steal money from the customer's account 
THE BANK STILL BEARS CONSIDERABLE RESPONSIBILITY FOR THE LOSS 
Because it is OBVIOUS that the bank could have arranged for a separate read-only passcode, that would not have given the bad employee the ability to steal money from the account.
This was obvious 10+ years ago, when I first started suggesting and requesting this (e.g. in "Contact Us" links on bank and investment company websites).

It was obvious then, and it is obvious now.

It has been obvious for so long that banks have had more than enough time to fix the problem.

If a problem is obvious, and the bank has not fixed it, then it is negligent.

And if a customer is robbed because the bank was negligent, the bank bears considerable responsibility for the loss.

--

The only way we are going to get this fixed is if the banks have financial incentive to fix it.  Losing lawsuits will be part of that incentive.

Lacking such a principle of basic responsibility, the banks will try to "fix" the problem of aggregator security in ways that they can charge for, which risk limiting innovation.

Monday, June 13, 2016

Perl testing: assert_equals('1010.xxx','1010xxx')

I have wasted quite a bit of time with various Perl test packages that either make assert_equals the same as assert_num_equals, or try to guess whether the arguments should be treated as strings or numbers.

Especially since I am often testing bitstrings for instruction decoding, which often look a lot like integers.

Test::Unit::Assert - search.cpan.org: "assert_equals"

# Or, if you don't mind us guessing
    $self->assert_equals('expected', $actual [, $optional_message]);

I am just bitching, but I begin to feel that assert_equals should default to assert_string_equals, not the less exact assert_num_equals - since the worst that can happen with a string comparison is that a test fails, and you can quickly fix it.

Versus falsely passing a test - bad.
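
A minimal illustration, in plain Perl with no test framework, of why guessing numeric comparison is dangerous for bitstring-like values:

use strict;
use warnings;

my $expected = '1010.xxx';
my $actual   = '1010xxx';

# String comparison: correctly reports a mismatch.
my $string_compare = ($expected eq $actual) ? "equal" : "NOT equal";

# Numeric comparison: both strings numify to 1010, so a numeric
# assert_equals would falsely report them as equal.
my $numeric_compare;
{
    no warnings 'numeric';    # suppress "isn't numeric" warnings for the demo
    $numeric_compare = ($expected == $actual) ? "equal" : "NOT equal";
}

print "eq: $string_compare\n";    # eq: NOT equal
print "==: $numeric_compare\n";   # ==: equal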

---

I know, I am (probably) not using the best Perl test or assert package.

So many choices! :-(

Sunday, June 12, 2016

Beyond the Mile High Menu Bar

Joel Spolsky discusses Bruce Tognazzini's "Mile High Menu Bar":
Designing for People Who Have Better Things To Do With Their Lives, Part Two - Joel on Software: "the mile high menu bar"

Tog invented the concept of the mile high menu bar to explain why the menu bar on the Macintosh, which is always glued to the top of the physical screen, is so much easier to use than menu bars on Windows, which appear inside each application window. When you want to point to the File menu on Windows, you have a target about half an inch wide and a quarter of an inch high to acquire. You must move and position the mouse fairly precisely in both the vertical and the horizontal dimensions.
But on a Macintosh, you can slam the mouse up to the top of the screen, without regard to how high you slam it, and it will stop at the physical edge of the screen - the correct vertical position for using the menu. So, effectively, you have a target that is still half an inch wide, but a mile high. Now you only need to worry about positioning the cursor horizontally, not vertically, so the task of clicking on a menu item is that much easier.
Based on this principle, Tog has a pop quiz: what are the five spots on the screen that are easiest to acquire (point to) with the mouse? The answer: all four corners of the screen (where you can literally slam the mouse over there in one fell swoop without any pointing at all), plus, the current position of the mouse, because it's already there.
Valid. Most self-taught UI programmers eventually figure this out for themselves.

But it misses something.  (Some things.)


(1) What goes up (to the menu bar) must go down (back to the current position)

Often, usually, when you go to the menu bar at the top, you also have to GO BACK to where you were.  And that place is no longer one of the "five easy places".

One of the things I used to most love about using voice commands in conjunction with a pointing device (whether mouse, trackball, pen or touch) is that I could say "make pen red" without having to change the cursor position.   Voice has higher valency than menus and pointers - it is much easier to get to a large number of commands, like colors and pen widths using voice than it is using a pointing device.

Context menus, drawn at the current position, help.   But often, usually, the command that I want is not in the current context menu.  Joel does not like options or customization, but I would love to be able to customize my context menu with the things I do most often.   Or, rather, perhaps I would like to have my context menu CUSTOMIZED FOR ME, rather than me having to figure out how to do it myself.

(Why don't I use voice commands much any more?  Because they kept breaking with every new OS release. It was a pain to maintain them.  And because Linux systems don't have voice support worth a damn, certainly not comparable to Windows. Or, if Apple and Linux do have voice support, it is just yet another thing that I have to set up.)

Pen and touch computers, interestingly, make it easier to move to buttons all over the screen.  It is also easier to move back - but moving away and moving back is still a pain.

In the really old, old days we sometimes had two pointing devices, two trackballs - or, equivalently, a switch that allowed a stationary relative pointing device to control two or more pointers.    So you could switch to "the menu pointer", choose a command, and then switch back to "the document pointer".

(Stationary like a trackball.   Can also work with a mouse, although not so nicely - although a mouse is relative, people often treat a portion of their mousepad as almost an absolute system.   Warping to menu and back is disconcerting if you are thinking as if your pointing device is absolute, or nearly so.)

I don't think non-power users would want such a multi-pointer setup.  Except possibly saying "Select menu", and then "Take me back".  


(2) Multiple Screens

For example, here is my current screen arrangement: I am currently typing near the bottom of the big 30" screen in the middle.



On the MacBook I am currently using, to get to the menubar at the top of the screen I have to move approximately 15" vertically. Using my trackball.

(By the way, I find using a trackball more accurate than using a mouse, unlike Joel.  I think because I use a big 2" trackball that I can spin with full forearm and wrist motions, or tickle with my fingertips, as opposed to the old thumb ball that Joel talks about. The original trackball, by the way, was a bowling ball.)

Even using my trackball, it is a pain to go from the bottom to the top of the screen.   But it is even worse to have to reposition into the middle to get back to where I was.

By the way, I have that funny arrangement of screens

(2a) because my screens are not all the same size: laptop (MacBook), the biggest display I was able to buy (30"), and my old, now second biggest, display (24") which is now a secondary display.

(2b) offset in that funky way so that I have more of those supposedly easy to get to corners that Tog talks about.   E.g. I have 3 corners in the big 30" monitor, 3 on the 24", but only 2 on the laptop.  There's no way to get 4 corners on the main display while keeping the 24" display in portrait mode.

By the way, I have often wished that the pointer could not cross from one screen to the other along the entire edge of the screen, but instead could only cross in certain places, leaving the other parts as "walls" that you can jam up against.



Similarly, such walls can also be inside a really large screen, giving you more "easy to jam up against" places that you can use for controls.



(3)  While we are at it: virtual reality displays may have no natural boundaries to jam up against.  No edges.  360 degrees, on all axes.  Synthetic user interface boundaries may be more necessary - or, getting rid of the edges as command points.















Things You Should Never Do, Part I - Joel on Software

Things You Should Never Do, Part I - Joel on Software: "the single worst mistake that any software company can make:

deciding to rewrite the code from scratch"




I must learn to live with broken software

I must learn to live with broken software.



When software tools that I have to use are broken, I must work around the brokenness as quickly as possible.



I must resist the temptation to try to figure out what the brokenness is. Unless doing that is quick.   I must not waste hours banging my head against stupid tools like Perforce.



This is especially important for commercial software.   With Open Source software, at least there's a chance that I can fix the brokenness - but with proprietary software, that is unlikely.



Although I believe that it is important to report bugs so that they can be fixed, when something is broken I need to find the quickest path around the brokenness.



---



The worst brokennesses for me are the ones that are only a little bit broken - that mostly work, except for some stupid thing that one might hope can be fixed with a little duct tape and a shell-script wrapper.



---



I went looking for inspiring quotes about this topic - advice on how to decide quickly whether it is worth trying to work through versus work around software brokenness.



This is the closest I have come, from the original wiki: http://c2.com/cgi/wiki?DoesSoftwareMakeUsersHappy:  Does Software Make Users Happy: "Techies have accommodated to broken software"



Not so great in context, since the poster is making an argument for SW perfectionism.



---



That may be the pithy phrase that I am looking for:



How to know when it is better to work around versus work through software problems.



Especially for software that you are using, not producing.



Probably also beyond software.


Mac OS X 'LSOpenURLsWithRole() failed with error -10810 for the file ...'

BRIEF:



LSOpenURLsWithRole() failed with error -10810 for the file ...


can be caused by trying to open a script
that itself calls /usr/bin/open

I.e. MacOS /usr/bin/open is not trivially reentrant.

DETAIL


Alvin Alexander provided some clues to this MacOS error:

Mac OS X 'LSOpenURLsWithRole() failed with error' | alvinalexander.com



I had a slightly different cause of

LSOpenURLsWithRole() failed with error -10810 for the file ...



I have some shell scripts that call "open SOME-MAC-APP",

where open = /usr/bin/open

$ bash 1349 $>  open -h
Usage: open [-e] [-t] [-f] [-W] [-R] [-n] [-g] [-h] [-b <bundle identifier>] [-a <application>] [filenames] [--args arguments]
Help: Open opens files from a shell.
      By default, opens each file using the default application for that file.
      If the file is in the form of a URL, the file will be opened as a URL.
Options:
      -a                Opens with the specified application.
      -b                Opens with the specified application bundle identifier.
      -e                Opens with TextEdit.
      -t                Opens with default text editor.
      -f                Reads input from standard input and opens with TextEdit.
      -F  --fresh       Launches the app fresh, that is, without restoring windows. Saved persistent state is lost, excluding Untitled documents.
      -R, --reveal      Selects in the Finder instead of opening.
      -W, --wait-apps   Blocks until the used applications are closed (even if they were already running).
          --args        All remaining arguments are passed in argv to the application's main() function instead of opened.
      -n, --new         Open a new instance of the application even if one is already running.
      -j, --hide        Launches the app hidden.
      -g, --background  Does not bring the application to the foreground.
      -h, --header      Searches header file locations for headers matching the given filenames, and opens them.



such as "open /Applications/p4v.app".





Trying to make a shell script that calls open into an app that can itself be called via open seems not to be supported - i.e. open is not reentrant, at least not trivially.



It is this "open within an open" that was causing

LSOpenURLsWithRole() failed with error -10810 for the file ...


I have two scripts: p4v (which calls "open /Applications/p4v.app"), and p4v-ag (which directly executes the executable inside that app bundle).

$ bash 1258 $>  diff p4v p4v-ag
16c16
< P4_EXECUTABLE=/Applications/p4v.app
---
> P4_EXECUTABLE=/Applications/p4v.app/Contents/MacOS/p4v
18c18
< open "$P4_EXECUTABLE" $@
---
> "$P4_EXECUTABLE" $@




 The p4v-ag script (the other can be inferred)

$ bash 1259 $>  cat p4v-ag
#!/bin/sh
# Simple script to run Perforce (P4) commands on macos
export P4CONFIG=.p4config
P4_EXECUTABLE=/Applications/p4v.app/Contents/MacOS/p4v
"$P4_EXECUTABLE" $@
I make both of these shell scripts into MacOS apps

$ bash 1261 $>  macos-appify p4v
/Users/glew/bin/p4v.app
$ bash 1262 $>  macos-appify p4v-ag
/Users/glew/bin/p4v-ag.app
Running the one that calls open within open gives the error:



$ bash 1263 $>  open ./p4v.app
LSOpenURLsWithRole() failed with error -10810 for the file /Users/glew/bin/p4v.app.
✗ 
Running the other does not.

$ bash 1264 $>  open ./p4v-ag.app
✓ 

Tuesday, June 07, 2016

Macro - Simple code templating mechanism - for Perl

Macro - search.cpan.org: "Macro - Simple code templating mechanism"



Too bad ... doesn't seem to work.   Perl v5.18.2



I was driven to try this because I am tired of typing  Data::Dumper's repetition.
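
(The repetition in question looks roughly like this.  The dump_named helper below is my own hypothetical workaround, not part of any module - and it still makes you type the name as a string, which is exactly the sort of thing a macro could generate for you.)

use strict;
use warnings;
use Data::Dumper;

my %config = ( retries => 3, verbose => 1 );

# The repetition: naming the variable once as a label, once as the value.
print "config = ", Dumper(\%config);
print "ARGV = ",   Dumper(\@ARGV);

# A hand-written helper reduces the noise a little, but the name still
# has to be repeated as a string.
sub dump_named {
    my ($name, $ref) = @_;
    print "$name = ", Dumper($ref);
}

dump_named('config', \%config);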






Thursday, June 02, 2016

Should asserts be disabled in production?

I used to be quite annoyed by Perl CPAN's

Carp::Assert: "assert(...) if DEBUG;"
i.e. by the fact that you have to append "if DEBUG" to an assertion in order to get it disabled in production.
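
For reference, the pattern looks roughly like this - a minimal sketch based on my reading of the Carp::Assert documentation; the surrounding code is mine:

use strict;
use warnings;
use Carp::Assert;    # exports assert() and a constant DEBUG (true by default)

my $balance = 100;

# The trailing "if DEBUG" is the annoying part: you must remember to append
# it so that Perl can constant-fold the whole statement away when assertions
# are turned off (e.g. via "no Carp::Assert").
assert($balance >= 0) if DEBUG;

print "balance ok\n";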



Now, I have flipped.



I often want assertions enabled in production code. At least cheap assertions.  In fact, I might go so far as to say "assertions should be enabled in production code by default, unless you explicitly say not to".



In which case, the Perl Carp::Assert syntax is not totally unreasonable.



I might prefer the Pythonish



if DEBUG: assert(...)



so that you can see the "if DEBUG" right up front.



But then again, Python's assert is disabled when you run Python "optimized" via python -O.



I was somewhat flabbergasted when I encountered Python style guides that recommended not using asserts, or specifically not using asserts inside library functions.    (Can't find a reference right now, so perhaps this is rare advice.)



Reason: since Python assert is disabled when you run optimized -O (which is common in some worlds), and since libraries cannot be sure that somebody competent is calling them, libraries should do

if not valid(parameter): raise SomeException
rather than

assert valid(parameter)
This advice is accurate, but unfortunate.  I always prefer concise code, and "assert" is more concise than "if...raise".   And probably less fragile, since exception hierarchies have a habit of changing.





So I can't blame Python, but I can still regret it.






Principle of Smallest Scope

My mini-flame about a comment that might be interpreted as idiotically forbidding declaring a named subroutine inside a named subroutine inspired me to write this blurb about "The Principle of Smallest Scope".



I don't think I originated this principle, although I have practiced it for (ahem) decades, in (ahem) two different centuries. (ahem, probably scared off lots of headhunters: too old)



However, googling could not find me a reference for 'Principle of Smallest Scope'. Perhaps after I write this it will.  (Perhaps also I just misremembered the name - "Principle of Least Scope"? "Principle of Most Restricted Scope"?  If so, I will update this blog with a pointer to a better name.)



The Principle of Smallest Scope is much like the Law of Demeter, although that is more about object oriented design, whereas POSS applies to non-object systems.  Both are just particular forms of information hiding.



So, what do I mean by the 'Principle of Smallest Scope'?



I mean that definitions, both named and anonymous, should be restricted to the smallest reasonable scope.



E.g. it is bad to use a global variable when a procedure local variable is sufficient.



E.g. within a procedure, you may have temporary variables - e.g. small local common subexpression elimination.

...
ret := big_complex_function_returning_struct(...)
print ret.to_string()
assert( not ret.is_error() )
next_function( ret.field1, "do more stuff" )
...
if idioms like this are used over and over, then I think you should restrict the scope of ret, if your language permits it:

sub some_function () {
    ...
    {
        ret := big_complex_function_returning_struct(...)
        print ret.to_string()
        assert( not ret.is_error() )
        next_function( ret.field1, "do more stuff" )
    }
    ...
    {
        ret := another_big_complex_function_returning_struct(...)
        print ret.to_string()
        assert( not ret.is_error() )
        another_next_function( ret.fieldA, "different but similar" )
    }
    ...
}

rather than
sub some_function () {
    ...
    ret0 := big_complex_function_returning_struct(...)
    print ret0.to_string()
    assert( not ret0.is_error() )
    next_function( ret0.field1, "do more stuff" )
    ...
    ret1 := another_big_complex_function_returning_struct(...)
    print ret1.to_string()
    assert( not ret1.is_error() )
    another_next_function( ret1.fieldA, "different but similar" )
    ...
}
E.g. it is good for a programming language to support scopes that are smaller than function scope.

    Python fails this.



E.g. it is good to be allowed to declare named subroutines within named subroutines - and for it to be illegal to access the inner named subroutine outside of its enclosing scope.  (At least, illegal by default.)

    Perl fails this.



E.g. ditto nested classes.  Lexically nested whatevers.









Now, I admit: masking of names can be confusing.  It may be good to warn.   It may also be good to require explicit override.





More later...















 









Variable will not stay shared in subroutine

Variable will not stay shared in subroutine: "The correct solution is not to declare a named subroutine inside a named subroutine."
This sort of comment pisses me off.   Nesting and lexical scope are some of the most valuable concepts in programming language design.



Just as security has the Principle of Least Privilege, programs should usually take advantage of the Principle of Smallest Scope.



Temporary values should have the smallest scope possible.



Helper functions are just temporary values that happen to be code.  It should be possible to define lexically nested functions that are not visible outside the lexical scope.



Perhaps the author of the comment that pissed me off meant to be more specific: "Perl's implementation of 'nested subroutines' is misleading, and should not be used because of this possible confusion."



Not least because Perl's 'nested subroutines' are not really lexically nested.  Instead, they are (as poster Ovid says earlier in the thread, before the comment that pissed me off):

[Named] subroutines are entries in the current namespace's symbol table (in a typeglob slot) and this does not allow for nesting. As the docs explain, this can be gotten around by using an anonymous subroutine because these get stuffed in a scratchpad, thus making them lexically scoped.
Also, it is quite confusing that 'nested named subroutines' and 'nested anonymous subroutines' have such different behavior.
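
A small sketch of the difference - the names are mine, and the behavior is the one the warning and Ovid's explanation describe:

use strict;
use warnings;

sub make_counter {
    my $count = shift;

    # Named nested sub: there is only ONE copy, and Perl warns
    # "Variable "$count" will not stay shared" - after the first call to
    # make_counter(), later calls get a fresh $count that named_next()
    # never sees.
    sub named_next { return ++$count; }

    # Anonymous sub: a new closure over the *current* $count every time,
    # which is what lexical nesting is supposed to give you.
    my $anon_next = sub { return ++$count; };

    return $anon_next;
}

my $c1 = make_counter(10);
my $c2 = make_counter(100);
print $c1->(), "\n";        # 11
print $c2->(), "\n";        # 101
print named_next(), "\n";   # 12 - still tied to the first call's $count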



It is also fair to say that this specificity may have been implied by context.  I should not get pissed off so easily.



But on the technical issue, yeah, Perl's funky support for not-really-lexically-nested-named-functions is a piss-off.



---



I think I am annoyed by this because recently I have been writing Python to use pyobjc on my Apple MacBook - and Python does not have lexical scopes apart from functions and modules.  So Python does not support the Principle of Smallest Scope.  At least up to Python 3.  Perl mostly supports the Principle of Smallest Scope - except for stupidities like these not-really-lexically-nested named subroutines.