Disclaimer

The content of this blog is my personal opinion only. Although I am an employee - currently of Nvidia, in the past of other companies such as Imagination Technologies, MIPS, Intellectual Ventures, Intel, AMD, Motorola, and Gould - I reveal this only so that the reader may account for any possible bias I may have towards my employer's products. The statements I make here in no way represent my employer's position, nor am I authorized to speak on behalf of my employer. In fact, this posting may not even represent my personal opinion, since occasionally I play devil's advocate.

See http://docs.google.com/View?id=dcxddbtr_23cg5thdfj for photo credits.

Tuesday, December 22, 2015

Perl one-liner to run all tests in a directory tree

This is a Perl one-liner to run all tests in a directory tree:
perl -e 'use File::Find; finddepth(\&wanted,"."); sub wanted { if( $_ eq "Makefile" ) { system("echo $File::Find::name; make -k test"); }} '
OK, I admit it is a rather bloated one-liner.

Quick and Dirty Test Suite

I remain an advocate of a quick and dirty test suite structure that is assembled in a directory tree, with tests in directories marked by a Makefile (with a test target), or a shell script, or...

This is an easy way of integrating tests that come from different test suites, with different assumptions.  It is easier than having to write an adapter from your-favorite-Perl-Test::Unit-test-suite to your favorite Javascript-test-suite, and vice versa, ad infinitum.   Actually, the directory structure, and the convention that there is a Makefile with a test target is an adapter - but a fairly low level, universal, adapter.

I usually accompany something like this one-liner with a script that greps for common indications of test failure, such as the string 'TEST FAILED'.  I also like positive indications of success, such as 'TEST PASSED'.  I like being able to create a green bar widget.
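For example, a minimal summary pass over captured output (a sketch: it assumes the output of all the "make test" runs was teed into a single all-tests.log, a hypothetical name):

    perl -ne '$p++ if /TEST PASSED/; $f++ if /TEST FAILED/; END { print "passed=", $p//0, " failed=", $f//0, "\n"; exit(($f//0) ? 1 : 0) }' all-tests.log

The nonzero exit status on failure is what a green/red bar widget can key off.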

I am not a big fan of Makefiles as a scripting solution - but they are fairly ubiquitous.  A Makefile with "make test" is more standard than a shell script called something like 'run-test.sh'.

The above one-liner does not make any provision for setup - environment variables, etc.  I assume that is either done before invocation, or is encapsulated by "make test".

CON: stopping recursive traversal

The above one-liner does not make allowance for Makefiles where 'make test' knows how to recurse into subdirectories.  Indeed, the above one-liner is how some of my quick and dirty test suites recurse over submodule tests.  Such recursive Makefiles used in conjunction with the above one-liner may result in tests being run more than once.

The above one-liner lacks an important feature: stopping.  Or, rather, it has the bad feature that it walks the entire tree - it does not have the ability to stop when certain conditions are met, such as a directory named 'Foo.do-not-follow'.

It is important to have the stopping feature, because it is quite likely that some foreign, imported test suite also uses the same convention of Makefiles with 'make test' targets - but may require some setup.   In such a case, it may be desirable to have:
  • .../ImportedTestSuite-adapter
    • Makefile - a makefile suitable for my quick and dirty test suite, with a 'make test' target.  Sets up for the imported suite below (see the sketch after this list)
    • ImportedTestSuite
      • the foreign, imported, test suite, which should NOT be run directly by my quick and dirty test driver; ideally, its subtree should NOT be walked
      • Makefile - a makefile for the foreign, imported, test suite, which should NOT be run by my quick and dirty test suite
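A minimal adapter Makefile might look something like this (a sketch: setup-env.sh and the imported suite's 'make check' entry point are hypothetical stand-ins for whatever setup the foreign suite actually needs):

    # .../ImportedTestSuite-adapter/Makefile
    test:
    	. ./setup-env.sh && \
    	cd ImportedTestSuite && \
    	$(MAKE) check

I.e. the adapter owns the setup, and translates my 'make test' convention into whatever the imported suite expects.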
I sometimes implement stopping with a naming convention, like
  • .../ImportedTestSuite-ADAPTER
    • run 'make test' in this directory, but do not traverse further
  • .../ImportedTestSuite-adapter/Makefile.MY-TEST-SUITE
    • look for this specially named Makefile, but do not traverse further
  • .../ImportedTestSuite-adapter/ImportedTestSuite.SPECIAL-NAME
    • do not run 'make test', do not traverse further
    • CON: I like to keep a test suite's own naming scheme
  • .../ImportedTestSuite-adapter/IMPORT/test-suite's-own-name
    • PRO: allows using a foreign test suite's own name for itself
    • CON: the import directory is often almost empty, containing only a single subdirectory.  (Although this may change if it is one of a family of test suites, or if you keep multiple versions.)
Naming conventions are easy, but applying them to a directory can produce very long and clumsy pathnames, and sparse subdirectories.  The pathname length can be a problem on OSes with relatively small pathname length limits, like Windows, or POSIX systems with small PATH_MAX.

Using special files at places in the directory, like 
  • .../ImportedTestSuite-adapter/Makefile.MY-TEST-SUITE
  • .../ImportedTestSuite-adapter/MY-TEST-SUITE.CONF
    • with an embedded directive
places less pressure on pathname length.   But it requires controlling the traversal, whereas a naming convention inside a directory name is trivial to filter out.
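Controlling the traversal is easy enough in Perl, at the cost of no longer being a one-liner. A sketch (two caveats: $File::Find::prune only works with find(), not finddepth(); and the .STOP-TRAVERSAL suffix is a hypothetical convention, not anything standard):

    use File::Find;
    find(sub {
        # do not descend into subtrees marked do-not-follow
        if( -d $_ && /\.STOP-TRAVERSAL$/ ) {
            $File::Find::prune = 1;
            return;
        }
        if( $_ eq "Makefile" ) {
            system("echo $File::Find::name; make -k test");
        }
    }, ".");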




Friday, December 11, 2015

I dislike Perl's lib/Module.pm AND lib/Module/SubModule.pm

I dislike how Perl requires modules, and the submodules which are really "internal" to them, to live in two places in the filesystem - even if those places are right next to each other.

E.g. the module's top-level code lives in lib/Module.pm.

And the code for any submodules that are really part of the module lives in the directory tree under lib/Module, e.g. lib/Module/SubModule.pm.

So you cannot simply operate on one filesystem object for the module; e.g. you cannot do:

mv lib/Module some-other-lib-path/Module

Instead, you must do
mv lib/Module.pm some-other-lib-path/Module.pm
mv lib/Module some-other-lib-path/Module
A minor difference - two commands rather than one - something that can often be patched over by regexps.

But an important difference: I would go so far as to say that the representation of Perl modules (classes) in the filesystem is not object oriented.  One of the key characteristics of an object - in this case treating the code for a class as an object - is that it behaves like a single object, unless explicitly opened up to look inside.

---+ Bad Influence on Perl CPAN class structure

I think this decision, to have lib/Module.pm and lib/Module/SubModule.pm, has had a bad, or at least confusing, influence on CPAN's module structure.

Sometimes a CPAN module Foo::Bar (lib/Foo/Bar.pm) is actually a module completely unrelated[*] to module Foo (lib/Foo.pm). It is even more confusing if module Foo actually has some internal modules Foo::Fu (lib/Foo/Fu.pm).

(Note *: OK, not "completely unrelated".  How about "unrelated wrt code structure", or "related only by topic, but not actual code".)

Then there is no localization of Foo in the filesystem.   Some parts of lib/Foo are part of module Foo, and some are not.   And not everything in module Foo is under lib/Foo.

I.e. the Perl CPAN filesystem structure sometimes reflects module structure, and sometimes it just reflects theme.

---+ Kluges

For this reason, I often make my modules appear as Foo::Foo, i.e. lib/Foo/Foo.pm.

But this can become tiresome.  Tiresome. Repetitious.  And it does not prevent somebody else from defining Foo::Bar, and wanting it to live in the same directory tree (if not in separate PATH elements).

So I might try Topic::Long_Module_Name_Unlikely_To_Conflict::Short_Module_Name

i.e.
      lib
           /Topic
               /Long_Module_Name_Unlikely_To_Conflict
                   /Short_Module_Name.pm
                   /Short_Module_Name/Sub_Modules...

e.g.  lib/Foo/AG_Foobar/Foo.pm

Not ideal.

Similarly, I might use a level of indirection to group internal submodules

Foo::internal::Submodules
vs
Foo::Unrelated_Submodules

Again, not ideal.    

---+ What is really needed

Modules should correspond to subtrees of the filesystem.

e.g. lib/Foo.pm, if no further structure.

lib/Foo.pmdir/Foo.pm if there are submodules such as lib/Foo.pmdir/Sub.pm,

with it being an error to have both lib/Foo.pm and lib/Foo.pmdir exist.

Or, if you will, require the indirection even if no further structure: lib/Foo.pmdir/Foo.pm, and never lib/Foo.pm

The main file might be called lib/Foo/main.pm.  But I rather like lib/Foo/Foo.pm, or lib/Foo.pmdir/Foo.pm.
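Perl's @INC hooks are enough to experiment with this today. A sketch - the .pmdir convention is my proposal, not anything Perl recognizes; what is real is that a code ref in @INC gets called with the partial path (e.g. "Foo/Sub.pm") and may return a filehandle to read the module source from:

    unshift @INC, sub {
        my (undef, $relpath) = @_;            # e.g. "Foo.pm" or "Foo/Sub.pm"
        (my $noext = $relpath) =~ s{\.pm$}{};
        my ($top, $rest) = split m{/}, $noext, 2;
        my $candidate = defined $rest
            ? "lib/$top.pmdir/$rest.pm"       # submodule, inside the module's subtree
            : "lib/$top.pmdir/$top.pm";       # the module's main file
        return unless -f $candidate;
        open my $fh, "<", $candidate or return;
        return $fh;                           # perl reads the source from this handle
    };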





---+ References

use - perldoc.perl.org: <>

require - perldoc.perl.org: <The require function will actually look for the "Foo/Bar.pm" file in the directories specified in the @INC array.>




Finally got emacs compilation-error-regexp working for perl CPAN Test::Unit::TestCase

Summary: Finally got emacs compilation-error-regexp working for perl CPAN Test::Unit::TestCase



Not only did I need to add a regexp, but I also needed to disable a compilation-mode-font-lock-keywords pattern - apparently there is some phase ordering.

This would be more reliable if there were the ability to return ALL possible substrings matching a regexp, as opposed to being maximally or minimally greedy.

    ;; adding patterns to compilation-error-regexp-alist and/or compilation-error-regexp-alist-alist
    ;; to try to get a perl CPAN Test::Unit error message, like

    ;; There were 3 failures:
    ;; 1) TestCases_for_FrameMaker_MIF_AG.pm:875 - test_another_error(TestCases_for_AG_FrameMaker_MIF)
    ;; expected '', got 'deliberate mismatch'
    ;;
    ;; 2) TestCases_for_FrameMaker_MIF_AG.pm:879 - test_yet_another_error(TestCases_for_AG_FrameMaker_MIF)
    ;; expected '', got 'deliberate mismatch'
    ;;
    ;; 3) TestCases_for_FrameMaker_MIF_AG.pm:898 - test_special_stuff__format_tuple_to_MIF_string__Cell_context__WIP(TestCases_for_AG_FrameMaker_MIF)

    (require 'compile)

    (add-to-list 'compilation-error-regexp-alist
                 '("\nThere were [0-9]+ failures:\n1) \\([a-zA-Z0-9_.-.---]+\\):\\([0-9]+\\) - test_"
                   1 2))
    (add-to-list 'compilation-error-regexp-alist
                 '("\n[0-9]+) \\([a-zA-Z0-9_.-.---]+\\):\\([0-9]+\\) - test_"
                   1 2))
    (if nil  ;; the regexp below causes compilation-error-regexp-alist to stop working completely - no matching
        (add-to-list 'compilation-error-regexp-alist
                     '("\nThere were [0-9]+ failures:\n1) .*\\(\\w\\|\\W)*\n[0-9]+) .*\\(test_\\)"
                       2)))


Diffs:


    === modified file '.emacs'
    *** .emacs 2015-12-12 00:18:53 +0000
    --- .emacs 2015-12-12 00:26:30 +0000
    ***************
    *** 4129,4139 ****
     ;; perl CPAN Test::Unit
     ;; 1) TestCases_for_FrameMaker_MIF_AG.pm:763 - test_special_stuff_to_insert_wip(TestCases_for_AG_FrameMaker_MIF)
     '(("\\([a-zA-Z0-9_.-.---]+:[0-9]+\\)"
    !      ;;(0 'compilation-line-face nil t)
    !      ;; (0 'ag-test-result-unexpected) ;; works (modulo filename matching)
    !      ;;(1 'ag-test-result-unexpected) ;; works (modulo filename regexp
    !      (1 compilation-line-face nil t)
    !      ))
     '(("expected\s+.*got\s+.*"
 (0 'ag-test-result-unexpected)
 ))
    --- 4129,4139 ----
     ;; perl CPAN Test::Unit
     ;; 1) TestCases_for_FrameMaker_MIF_AG.pm:763 - test_special_stuff_to_insert_wip(TestCases_for_AG_FrameMaker_MIF)
     '(("\\([a-zA-Z0-9_.-.---]+:[0-9]+\\)"
    ! ;;      ;;(0 'compilation-line-face nil t)
    ! ;;      ;; (0 'ag-test-result-unexpected) ;; works (modulo filename matching)
    ! ;;      ;;(1 'ag-test-result-unexpected) ;; works (modulo filename regexp
    ! ;;      (1 compilation-line-face nil t)
    ! ;;      ))
     '(("expected\s+.*got\s+.*"
 (0 'ag-test-result-unexpected)
 ))
    ***************
    *** 4628,4637 ****
      (add-to-list 'compilation-error-regexp-alist
 '("\nThere were [0-9]+ failures:\n1) \\([a-zA-Z0-9_.-.---]+\\):\\([0-9]+\\) - test_"
    1 2)
)
    ! (if nil
(add-to-list 'compilation-error-regexp-alist
    !     '("\nThere were [0-9]+ failures:\n1) .*\\(\\w\\|\\W)*\n[0-9]+) \\([a-zA-Z0-9_.-.---]+\\):\\([0-9]+\\) - test_) "
    !        2 3)
 )
)
    --- 4628,4640 ----
      (add-to-list 'compilation-error-regexp-alist
 '("\nThere were [0-9]+ failures:\n1) \\([a-zA-Z0-9_.-.---]+\\):\\([0-9]+\\) - test_"
    1 2)
    + (add-to-list 'compilation-error-regexp-alist
    +     '("\n[0-9]+) \\([a-zA-Z0-9_.-.---]+\\):\\([0-9]+\\) - test_"
    +        1 2)
)
    ! (if nil  ;; the regexp below causes compilation-error-regexp-alist to stop working completely - no matching
(add-to-list 'compilation-error-regexp-alist
    !     '("\nThere were [0-9]+ failures:\n1) .*\\(\\w\\|\\W)*\n[0-9]+) .*\\(test_\\)"
    !        2)
 )
)



Friday, December 04, 2015

p4 status - misleading Perforce command name

So, I want to run a command in a script that asks Perforce if a file exists in the depot.



I wonder what such a command might be called. Let's try "p4 status".



Strange... google "p4 status".



!!!! "p4 status": "Open files for add, delete, and/or edit in order to reconcile a workspace with changes made outside of Perforce."





This is stupid.  In English, in 99.99% of programming systems, "status" is a read-only query.
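What I actually wanted is closer to "p4 files" (a sketch, assuming - as its documentation suggests - that "p4 files" reports "no such file(s)" for a missing depot path):

    if p4 files //depot/some/path/file.c 2>&1 | grep -qv 'no such file'; then
        echo "exists in depot"
    fi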


Wednesday, December 02, 2015

.ignore files for version control systems

It is good to have .ignore files at the root of a project's repo. (bzr, hg, git, but not Perforce)

It is good to have global ignores - as the bzr manual says, "ignored files which are not project specific, but more user specific. Rather than add these ignores to every project, bzr supports a global ignore file ~/.bazaar/ignore".

In addition to "per-user global", it is also good to have "global across an installation". Git has this, but not bzr.
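For reference, git's layering of the two (real config keys; the /etc/gitignore path is just a conventional choice):

    # per-user global ignores
    git config --global core.excludesFile ~/.gitignore_global
    # installation-wide ("system") ignores - the level bzr lacks
    git config --system core.excludesFile /etc/gitignore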

How to Get a Refund For an iPhone, iPad, or Mac App From Apple

How to Get a Refund For an iPhone, iPad, or Mac App From Apple:






Many iPhone apps do not work as expected.



But getting refunds for them is more painful than on Android.



Another reason to prefer Android - if they can stop the security bleeding.


Wednesday, November 11, 2015

I hate Outlook!: the continuing whining, bitching, and moaning

I very carefully recovered circa 251 emails that were misclassified as spam.

Bug reports that I wanted to ensure were not lost, etc.

And then I accidentally deleted them. Wrong folder.

No undo.



I hate Outlook!

Friday, November 06, 2015

Stupid Blogger does not understand difference between symbolic link and data deduplication

See the quote from a UNIX bigot's blog at bottom.



Now, I am a UNIX bigot.  But I also try to be an unbiased UNIX bigot.



In particular, I understand the difference between symbolic links and data deduplication - a filesystem feature where two completely separate files that happen to have the same contents are transparently linked, behind the user's back, to share physical storage.



Linked in such a way that, if one file is modified and the other is not, then the change is NOT propagated to the other.



I.e. linked transparently to the user.  Invisibly, except for occupying less disk space.



Symlinks are sometimes used as a "poor man's" approximation to this.  But symlinks are definitely not transparent.  E.g. if the target file is removed, the symlink dangles.  That should not happen in a proper "single instance store".



For that matter, UNIX hardlinks can be used as a "poor man's" approximation to this.   But, again, not transparent.



Now, I do not know if Microsoft's single instance store is fully transparent.  But I suspect it is - or at least more so than symlinks or hardlinks.



The Civilized Explorer Travel Bizarre Link Page: "Microsoft Innovations. This is an actual press release dated February 28, 2000, on the actual Microsoft Web site wherein Bill Bolosky and two Microsoft colleagues claim to have invented symbolic links three years ago!
... an idea occurred to them -- why not save operating system disk space by storing duplicate files as links that point to a single file housed in a central location?
Are these guys brilliant ...
During the next 1-1/2 years, Bolosky, a researcher in Microsoft Research's Systems and Networking Group, and three of his researchers worked full time with the Windows 2000 team to build the technology, now known as the Single Instance Store.
Or what?
And you thought Microsoft's only innovations came from other companies that they either bought or crushed."


Stupid UNIX bloggers unite!  You have nothing to reveal to the Internet except your ignorance, and inability to absorb new concepts!




Thursday, October 15, 2015

Google Chrome browser shows corrupted content on Mac OS X – DisplayLink Support

Google Chrome browser shows corrupted content on Mac OS X – DisplayLink Support: "This issue is due to the browser attempting to use hardware acceleration functions not currently available on virtual screens."



I got bit by this bug - disabling hardware graphics acceleration in Chrome, but not elsewhere, fixed the problem, as recommended by DisplayLink.



I wonder why DisplayLink cannot hook the acceleration calls, and report an error when applied to their display?



But I also wonder why Chrome does not check the capabilities of the display.  Here, the problem is probably confusion, since some of my displays have acceleration, and some do not.   Heterogeneity - gotta love it!  



Q: why does MacOS expose use of accelerators to apps?






Thursday, October 08, 2015

OSX HFS+ case insensitive by default - a security bug waiting to happen

I was having problems installing cpanm modules on my MacBook.

Turns out I have had a script ~/bin/CC in my path since circa 1980 - cc plus some pleasant defaults. It has worked from UNIX v6 through v7, Eunice, BSD 4.1, 4.2, 4.3, SVr4, Xenix, Gould UTX, Linux, cygwin... and it failed for the first time on MacOS: infinite recursion.

I wondered if HFS+'s case insensitivity could be exploited for a security hole. Googling reveals that the problem has already been encountered: article.gmane.org/gmane.linux.kernel/1853266 and itworld.com/article/2868393/… (January 2015). Although fixed in Git, this is an exploit waiting to happen, for Mac users who have ever installed software from some other UNIX-like case-sensitive system. For that matter, it is probably a potential security hole for code ported from case-sensitive iOS to Mac OS X.
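The failure mode is easy to reproduce (a sketch; it assumes ~/bin precedes the system directories in PATH, which is the whole point of a personal wrapper script):

    $ cat ~/bin/CC
    #!/bin/sh
    cc "$@"        # intended to find the real compiler, with pleasant defaults
    $ CC hello.c   # on case-insensitive HFS+, the "cc" inside CC resolves
                   # to ~/bin/CC itself: infinite recursion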
 

Tuesday, September 29, 2015

Merging in bzr: The git approach vs the bzr approach

Nice comparison of the git vs bzr merging approaches.



Git: "The changes appear to emerge fully-formed with no evidence of the process which created them."



Bzr: "the project's revision history will show the actual process the developer went through to create the branch."



comparison



Since some [D]CVSes support both styles - heck, even git and mercurial support both styles, and I think even bzr does - it may be more accurate to call it "clean history" versus "actual history" approaches.



The availability of a "rebase" tool is the main thing that enables the "clean history" approach.





Myself, I am unambiguously an "actual history" advocate.  "Those who cannot remember the past are condemned to repeat it".





But I can understand why so many Linux developers want a clean history.





Myself, I want both: the actual history, and possibly a clean history that is what you see by default, when the actual history is pretty messy.  (And believe me, I have seen the actual history get very messy.)





E.g. if you haven't rebased a task branch before merging, I want to see it in the style that the page depicts as bzr-style.  But I suppose that it is okay to "fold" it into the trunk, if there has been no trunk activity in the meantime.   And if every checkin on the task branch is release-worthy.   But if there are any checkins on the task branch that were half-assed, that you might not want to bisect to, then no, I don't want it folded in.



But if you have rebased a task branch, because the trunk has been modified since the task branch was originally forked, and have tested all of the intermediate points on the rebase, then I want to see BOTH in the actual history.



I want to see the original, pre-rebase, task branch.   Where the work was actually done.



But I am okay with seeing the rebased task branch, and even having it folded into the trunk.



I am okay with presenting ONLY the rebased task branch and/or folding it into the trunk, by default, HIDING the original pre-rebased task branch.  But this is by default only.  I would want to have some sort of indication that says "there's more history here".



Why?



Because I don't believe that you can completely test correctness.



Because experience shows that there is always a chance that the rebased task branch, folded into the trunk, will have a bug. A bug that occurred in the rebased task branch, but not in the original pre-rebased task branch. Essentially a bug caused by interference between whatever happened on the trunk and whatever happened on the task branch.



Even if your full test suite has been run on all of the pre- and post- rebase checkins.   Because sometimes the test suite doesn't test for everything.



Sure, oftentimes it will not matter - you bisect on the rebased and folded task checkins, and find what the bug is.   But sometimes  it is good to understand why the bug occurred, not just what it is.





Rebasing, and other history rewriting mechanisms, have two functions IMHO:



1) cleaning up the history



2) as debugging tools



E.g. if you have a task branch, which has passed the test suite at all checkins, and a trunk which has similarly passed the test suite, and you merge - and there is a failure.  Then obviously the failure is due to an interaction.



Creating a rebased clone of the task branch is then a debugging tool - you can see which of the task branch checkins, all of which passed tests pre-rebasing, cause tests to fail post-rebasing.
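In git terms (the page is about bzr, whose rebase comes from a plugin, so treat this as a sketch of the workflow rather than the one true recipe):

    git checkout -b task-rebased task          # work on a throwaway clone of the task branch
    git rebase --exec 'make -k test' trunk     # replay each task checkin onto trunk,
                                               # running the test suite after each one

The first replayed checkin that fails its tests is the one that interacts badly with what happened on the trunk.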



This is a damned useful thing to do even if the rebased task branch is not checked in / pushed to some more public repo - even if all you do is a branch merge.



E.g. sometimes I use a rebase just for debugging, and then throw out the rebased branch.





What I do object to, however, is rebasing and NOT testing all of the intermediate checkins.













Merging in bzr: The git approach vs the bzr approach




Monday, September 21, 2015

Twitter's Tips for Making Software Engineers More Efficient - IEEE Spectrum

Amen!!!!!!!!  (sez me, who has been dealing with FrameMaker and Parallels crashing many times a day.)



Twitter's Tips for Making Software Engineers More Efficient - IEEE Spectrum: "Reducing the number of times tools break—even if each incident just causes an interruption of a minute or two—can bring about huge productivity improvements. Seibel explains that each interruption takes an engineer out of “flow,” and studies show that it typically takes 15 minutes to enter a flow state. "




Thursday, September 17, 2015

Zero - smart and secure email client for your inbox on the App Store

I just installed the "Zero" iPhone app - and I *love* it.  It allowed me to plow through 200+ (I think 300+) emails in my Gmail Inbox in less than half an hour.  Whereas I had only managed to clear out a dozen in an hour using Google "Inbox for Gmail" earlier today.



I love Zero enough that I paid 5.99$ for an upgrade to unlimited accounts within an hour of starting to use it.  But, see below.



And I cleared out my Gmail Inbox using Zero while walking on my treadmill desk!!!   OK, I admit that I do everything except detailed drawing and coding while walking on my treadmill desk.  But Zero was easier to use than most other apps.



Zero is a lot like Triage (http://www.triage.cc/) - an email app that I used to use on Android, although the web page only mentions IOS at the moment.  I gave up Triage a while back, because back then (1) it required VPN to run on my phone in order to connect to my mailserver - which I am guessing meant that it did not use ActiveSync to connect to Microsoft Exchange - which was suboptimal because it meant more hassle, and a long password to type in, when I wanted to quickly triage my email, and (2) back then Triage crashed a lot - sometimes seeming to delete email.

      But when Triage was connected and was not crashing, it was sweet!



Both Zero and Triage do one job really well.  They aren't full mail readers.  What they do is allow you to quickly go through your Inbox, quickly archiving or discarding low priority email, allowing you to manually filter out low priority stuff so that only important stuff is left for you to handle, using a more powerful mailreader. Both Zero and Triage have some ability to reply to, forward, or create new email - short messages, after which you archive or delete.  But longer replies you would want to handle somewhere else.   (Back when I was using Triage, most of the crashes and lossages were related to replying to or composing new email.)



The key reason why Zero and Triage are more efficient for this particular email task is their user interface: they both use a "card" interface.  One email per card.



The card shows a lot of a message - often the whole thing.  Imagine the "preview" lines you get in a typical mailreader, but expanded to a screenful per message. You can tap on an email to truly see the whole thing, with HTML formatting, etc.  But the "large preview" you see on the card allows me to process emails more quickly, much more often than the "short previews" you see with regular mailreaders like Outlook or Inbox or Mail or ...



And then the actual processing is easy: in Zero's case, swipe upwards to archive a message, so that it no longer appears in your Inbox. Swipe left exposes a screen to the right where you can see the full email.  Tap a star to leave the mail in the Inbox.



Yes, this is not that different from the swipe actions of your favorite mailreader. Swipe right to archive, swipe left to schedule. (AFAICT neither Zero nor Triage have a "Defer" feature.)



But the cards, allowing a large amount, often all, of the email to be seen, and the simple swipe, allows me to go much faster.



Also, I have noticed: I seem to be able to swipe up or down much more easily than left or right.  In part because I often switch the hand I am using the phone with: thumb swiping up works easily with either hand, but swiping left or right is more difficult no matter what, and often much more difficult in a non-preferred direction.



(I believe that swiping up was one of the big reasons that I found Flipboard https://flipboard.com/ so pleasant to use when it first came out.  And Flipboard is still pleasant to use in terms of swiping - it is just horribly full of ads and crashes my iPhone.)







I think this "card" interface is becoming a BKM (Best Known Method).  I wish that it were integrated with standard email apps, like Inbox or Outlook for iPhone.  Cards are just an alternate interface or "view".  Most email apps have several views: (1) the message list, w/wo previews, possibly sorted.  For Inbox, some other folder, or a search. (2) a full message view. (3) Now "cards".

     (I really think that a "folder tree view" should be included, although most email readers have really, really, sucky tree viewers.)







I love Zero enough that I paid 5.99$ for an upgrade to unlimited accounts within an hour or starting to use it.  But...



Zero is free for the first account, 1.99$ for the second, and 5.99$ for unlimited.



I hoped that I would be able to use Zero for my work email - I seem to remember that Triage could access our work server, using IMAP (hence requiring VPN).  But this seems not to fly.



So, I can only really use Zero for my personal email, not at work.  I won't begrudge them the 6$ - I like supporting software that is useful.  I wish them luck.









This situation is becoming more and more common:  email is something people spend a lot of time on.  There are many good ideas bubbling up - cards, swiping, AI-like autoprioritization.   But no single app has all of the good ideas.  



So we end up using multiple apps. Email widget apps, that do one thing well, but where you have to use other apps for other things. And just cross our fingers that the apps play together well.



Zero seems to use Gmail's standard Archive feature. And Zero's "mailfeed" seems to be just "Inbox and Unread" - i.e. Zero's concepts seem to map to standard Gmail concepts.   I hope it will not interfere.



As far as I know,  there is no standard feature for "Defer email to process later at a scheduled time".  Different mailreaders handle this in different ways: labels, special folders.  







And this is a common situation wrt security: typically, not being able to access corporate mailservers using your favorite mobile app.  Being restricted to a much smaller number of limited, less innovative, email apps, like Microsoft's Outlook app or Apple's standard Mail.app.



I understand corporate IT's unwillingness to allow random apps access to the corporate mailservers.  Especially when those apps often process the email on the app company's servers - even if they put it back on your employer's mailservers.



I just wish that there was a better way: a way to get innovative features, like Zero and Triage's cards, on my work email.



I suppose monetization is part of the problem.











More and more, I find that the technology I use to handle email at work sucks compared to what I use outside work. Not only is it slow, but it is so much less pleasant that I am averse to reading work email using the primitive apps.









Zero - smart and secure email client for your inbox on the App Store




Tuesday, September 08, 2015

Version Control of Crash Recovery Cruft



I have developed a system whereby I archive the "cruft" created by a crash into a tar file

TAR.archived-framemaker-cruft.tar



so that I can recover autosave files if necessary, but so that they do not clutter my directory.





I just added a quickie to count the number of crashes saved, or at least an approximation thereof.







This is one of those "duh, why didn't I do this years ago" sorts of things.



EVERYTHING needs version control.



Even crash recovery files.



It sure is nice to have multiple versions of the same crash recovery file(name), rather than having to rename them autosave.1, autosave.2, ...  (Most important thing is to be able to name things. Second most important thing is to not have to name things.)







    CRUFT_FILES= $(FM_CRUFT_FILES) $(PPT_CRUFT_FILES)
    FM_CRUFT_FILES= *.auto.fm *.backup.fm *.auto.book *.backup.book *.recover.fm *.recover.book
    PPT_CRUFT_FILES= *Autosaved*.pptx

    .PHONY: save-cruft
    save-cruft: archive-all-autosave-files

    .PHONY: archive-all-autosave-files
    archive-all-autosave-files:
    	-echo $(PPT_CRUFT_FILES) | gtar -uf TAR.archived-framemaker-cruft.tar
    	-rm $(PPT_CRUFT_FILES)
    	-echo $(FM_CRUFT_FILES) | gtar -uf TAR.archived-framemaker-cruft.tar
    	-rm $(FM_CRUFT_FILES)
    	make unlock-all-framemaker-files

    .PHONY: ls-cruft
    ls-cruft:
    	ls $(CRUFT_FILES)

    .PHONY: count-crashes
    count-crashes:

    .PHONY: count-saved-crash-cruft
    count-saved-crash-cruft:
    	gtar tvf TA*tar | perl -p -e 's/^([^ \t]+[ \t]+){3}([^ \t]+ [^ \t]+).*/\2/' | sort -u | tee >(echo "Unique timestamps (crash points in archive):" `gwc --lines`)











Sunday, September 06, 2015

Who says Mac OS X does not need uninstallers

I have heard people say that Windows needs uninstallers for DSW because of design flaws, whereas Mac OS X does not need uninstallers because of its perfect UNIX-ish design.



False!



Evidence: uninstalling the "Basis Sync.app", now that I am no longer using my Basis B1 watch (R.I.P.).



The web page link says, paraphrasing: "To uninstall move the 'Basis Sync.app' from the /Applications folder to Trash.  Also, go and remove support files from ~/Library/Application Support/Basis Sync."



The two separate steps would be encapsulated by an uninstaller.



But furthermore:  trying to move the 'Basis Sync.app' fails, because the app is running - I was starting it by default.  (I guess Apple has put in a Windows like interlock; normal UNIX would blindly unlink a running program.)  Option-Command-Escape does not show 'Basis Sync.app' as running.  However, standard UNIX ps does, and I can kill it, and then remove the app.



I know how to do this.  But how many non-UNIX users would?   My wife?  My mother?



Automating - creating an uninstaller - forces the software vendor to really know how to uninstall.

Wednesday, September 02, 2015

Watch and fitness band

After the Withings Activite' Pop failed to satisfy, I have reverted to a slightly updated version of what I have been wearing for the past year: a semi-smart watch on one wrist, and a fitness tracker wristband on the other.

Before their watery demise, I was wearing the Basis B1 watch, and a Jawbone UP.   I bought the Basis watch first. I later bought the Jawbone UP, at first just to get vibrating alarms - but I liked the UP iPhone app so much more that it became my main fitness tracker.   I mainly used the Basis as a watch, to tell the time, and because it had a display so that I could see how many steps I had done without opening my phone.

Now I am wearing a Pebble Smartwatch, and a Jawbone UP2.

The UP2 because I continue to prefer its app, and for sleep monitoring.  The Jawbone UP was able to tell whether I was awake or asleep automatically, but only the Misfit watchapp can do that on the Pebble - and it interferes with other watchapps, like the Jawbone UP watchapp, which actually counts steps. 

Interference between watchapps is the bugbear of the Pebble.

I was therefore a bit disappointed to learn that the UP2 has separate modes for sleep and awake.   Automatically detecting is the entire reason I got the UP2.

The Pebble SmartWatch (a) because running the Jawbone UP watchapp displays steps taken in almost real time - i.e. because it has a display.  (b) Swimming.  (c) Other smartwatch goodness: (c.1) calendar (via SmartWatch Pro), (c.2) notifications about email and text (mixed blessing, needs better filtering), (c.3) bus and train schedule. (d) Because I can write my own watchapps, in my own copious free time.

I can already feel the gravitational pull:  I am frustrated when something that could fit in the smartwatch's limited form factor is not available as a watchapp. Just as I prefer to use my cellphone for most things that fit on it, I think the same will apply to the watch.   The cellphone just has a bigger display, and the MacBook an even bigger display, and a big keyboard.

If we had goggles...
 

Tuesday, September 01, 2015

Development Platforms for MIPS (esp MIPS64)

I have often been asked by my hacker friends "Where do I get a MIPS based system to play with?"  As in home, personal, hacking and experimentation. SW and/or HW.



Apart from saying "Well, your home router probably has a MIPS CPU" or "Install OpenWRT or the like on a home router" I have never really had a good answer for my friends.



In particular, I did not know where to find a MIPS64 development system - to recommend to my friends, but also for my playing around with at home.



I just found the webpage of MIPS development systems here, and in particular the Ubiquiti EdgeMax ERLite-3.  With its big brothers in the Ubiquiti EdgeRouter family (datasheet), the ER-8 and the ERpro-8, ranging from



2 MIPS64 cores (from Cavium) @ 500 MHz

512MB DRAM / 2GB flash / 4 ports



to



2 MIPS64 cores @ 1 GHz

2G DRAM / 4GB flash / 8 ports


with prices from circa 92$ to 345$ on Amazon.







I was about to complain about the relatively small amounts of DRAM and Flash for 64-bit OS development.  (Although, a lot more than most routers have.)



But: this is MIPS.  SW TLB miss handling is the norm.  If you can memory map the flash, you could make the flash part of the memory hierarchy beyond DRAM.  I.e. treat DRAM as a cache for flash, write absorbing.



Heck, this could be a fun system for ASPLOS type OS hacking, designing alternative page table structures for OSes, virtual machines, etc.



Attach a NAS to the ethernet ports, and away you go.





A very nice system for SW hacking, especially if you hack at the virtual memory level.


Unclear how much hardware hacking you can do.







Daddy knows what he wants for Christmas.




Monday, August 31, 2015

Now trying Pebble as a fitness (swim) monitor

After my disappointment with the Withings Activite' Pop, described in previous blog entry, I am going to try the Pebble smartwatch.



Actually, I was about to give up and reproduce my old configuration - a Basis watch, *AND* a Jawbone UP wristband - but I realized that I could get both a Jawbone UP2 and an original Pebble for the price of the current Basis Peak model.



Plus, the Pebble is the only other relatively inexpensive fitness watch that supposedly can handle both swimming and walking. After the Withings Activite' Pop, the next stop seems to be Garmin triathlon devices >> 200$.



Plus++, the Pebble supposedly has much other goodness: downloadable apps, a free SDK so that I can try writing my own, etc.







Performing a Force Sync & Simulated Workout (Pebble) – Swim.com:




Thursday, August 27, 2015

My new fitness watch: Withings Activite Pop

I just got my new fitness watch: the Withings Activite' Pop.

It replaces the Basis B1 watch and Jawbone UP that I have been wearing together, one on each wrist. I got the Basis in October 2013, and the Jawbone UP in November 2014. Replacing them because they died after being soaked in rinse water with my wetsuit. :-(

I considered getting a new Basis watch and a new Jawbone UP, since the combination worked well, except for the large UP being a bit tight.

But I decided to switch to the Withings Activite Pop, mainly because it was one of the few activity monitors that can automatically track swimming as well as walking.

This blog to track experience.

Ubiquiti EdgeMax ERLite-3 - Imagination Community

Ubiquiti EdgeMax ERLite-3 - Imagination Community:






Looks like this may be a fun and fairly inexpensive way to do 64-bit RISC hacking.  (MIPS64.)



Plus, if it doesn't work out, it's still a good router, recommended by the bufferbloat guys.




Wednesday, August 26, 2015

Comcast home internet unusably slow (typically afternoons)

Yesterday and today I have worked from home rather than going into the office.



Both days my internet connection has become unusably slow sometime in the afternoon. This has happened before: I used to joke that it happened "When the kids got home from school."  But the kids are still on summer vacation.  Still, it could be "When the kids get off from day camp", or "When the stockbroker down the street stops trading for the day".



This afternoon slowness has happened many times before - so much so that I fell into the habit of driving into the office rather than working from home.  I had forgotten that this was the reason I was not working from home - I was wondering why I was not using my treadmill desk, which I love, as much as I would like (because I prefer to read email in the afternoon on my treadmill desk - and if my Comcast internet is unusably slow, then I try to be in the office in the afternoon, and hence do not use my treadmill desk).



I have not hitherto investigated this problem in detail - apart from saying "This is probably bufferbloat", and never getting around to installing modern OpenWRT, with bufferbloat mitigation, on my routers (because my first attempts to install CeroWRT/OpenWRT/DDWRT failed, possibly due to locked routers).



This blog item to hold notes related to investigating this problem.  In public, because it is unlikely to hurt, may embarrass Comcast into fixing the problem, etc.


Tuesday, August 25, 2015

Bluetooth Stereo Headphones: Kinivo 240 & 220 beat SoundBot SB240 and Jarv Joggerz BT301

I depend on a headphone/headset/mike in order to participate in phone meetings.



I find that I cannot use earbuds - they hurt my ears, and often cause ear infections.



I prefer on-ear headphones. (Or over-the-ear, or around-the-ear.)



I mainly used wired headphones, but I had pretty good luck with a pair of Kinivo BTH240 bluetooth headphones.  When I mislaid them, I decided to do a bit of comparison. I have tried:



Kinivo BTH240 (29.99$ for a "Limited Edition" bright blue set; the set that I purchased in December cost me 24.99$, which seems to be a typical price.)



Kinivo BTH220 (13.99$)



Soundbot SB240 (14.99$)



Jarv Joggerz BT301 (17.99$)







BOTTOM LINE:   the Kinivo 240 headsets are best, mainly because they have a greater range of volume - they can be cranked louder than the Soundbot or the Jarv Joggerz.



The BTH220 differs from the BTH240 mainly in that the 220 uses a mini-USB connector, while the 240 uses a micro-USB connector.  The 220 also has less battery life.



One might think that the Kinivo BTH240 and the SoundBot SB240 would be similar - the common number "240" suggests they use the same silicon.   Physically, I prefer the Soundbot - it is easier to find the buttons to turn volume up or down, and/or go forward or back on a podcast.   But, more important is that the Kinivo BTH240 can play louder than the Soundbot SB240.



Volume matters to me, because I listen to podcasts played from my phone to these headsets, typically while walking in the woods.  When walking near a moderately loud road, the Kinivo BTH240 volume can be turned up loud enough so that I can hear, while the Soundbot SB240 is drowned out by the road noise.



I do not know if the difference in loudness is due to physical construction, or due to electronics.   I suspect the latter.



The Jarv Joggerz BT301 was a hopeful acquisition:  the Kinivo and SoundBots all have fairly big bars connecting the  earmuffs, which make it difficult to do floor work like stretches and situps.  The Joggerz has a much lighter wire that sits closer to the skin.  Indeed, the Joggerz can be used to do situps: but overall the Joggerz is almost inaudible at my healthclub with its pounding music.  Moreover, the Joggerz controls are much harder to work.



So that's it: the Kinivo BTH240 is my preference, the headset that I try to use in the gym or outside.  The others I keep as backup, at my office, or in my home office.



As for colors: I mainly don't care, but there is some small utility value in having different colors, to make it easier to keep track of which is charged or which belongs in my gymbag.








Saturday, June 06, 2015

Basis B1 won't sync to MacBook / iPhone

I landed in Atlanta an hour ago.



Since that time, I have been trying to sync my Basis B1 activity monitor  watch.  If for no other reason than to get the watch set to the east coast timezone.



Slowness and eventual failure. Repeatedly.



One big complaint I have about iOS devices in general is that they seem to have much poorer background operation than Android.   For example, on Android my Basis watch would sync without me having to open the application.  Worse, I seem to have to babysit it to ensure that the iPhone does not go into a powersave mode.



I do not know if this is fundamental to iOS, or just a simple matter of programming.   Certainly, many iOS apps & devices behave this way - they operate only when in foreground.



This reminds me of old MacOS, with "cooperative multitasking", rather than fully preemptive scheduling.



This may be a compromise to save battery life. Certainly my iPhone seems to have better life than a Samsung with comparable battery.  But that may be the fault of Samsung bloatware, rather than Android itself.



--



This iOS characteristic does not let Basis off the hook.



The Basis consistently fails at 49% transferred.   At one point I saw a "low memory" warning on the watch.   Low memory when sync'ing - surely they reserve a minimum buffer size to finish a transfer?




Thursday, June 04, 2015

Making the Outlook Ribbon visible

Making the Outlook Ribbon visible


For the last few days I somehow managed to hide the commands on the Outlook "ribbon" - the dynamic set of menus etc. at the top of the screen.

Today I finally got pissed off and googled.

Here's the fix: look for the little arrow-like box at upper right.

Microsoft's help pages were deceptive.

First, the page title is "Minimize the Ribbon", whereas I wanted to restore the ribbon. Google works better than MS search to find the "Restore the Ribbon" section title on the same page.

Second, the graphical widget they suggest is not what I see on my screen. Instead I see something different.

I do not know if this is a difference in software versions, or due to Parallels' Coherence window management stuff.  It appears this way even when Coherence is disabled.

Overall, Microsoft's attitude seems to be that you would only do this deliberately. Whereas I find it far too easy to do this by accident. The keyboard shortcut, ctrl-F1, is easy to type by accident.

----

Oh joy, I see that the screen clippings that I recorded in Microsoft OneNote  (native to Mac) to show the problem do not cut and paste into Google Blogger.  Screw wasting time trying to fix that.

The words provide a clue, although the pictures would be even more helpful to anyone else with the same problem.

And the blank boxes - if they post - are a sad commentary on the state of data exchange.

---

(Outlook (Office15), on Windows 8.1, running under Parallels on a MacBook.)


Thursday, May 28, 2015

Using Outlook/Exchange email faster on IOS/Android than on Windows!!!!

I just sent email to my coworkers saying that I am about to give up on using Outlook on my PC (currently Windows 8), in favor of reading all of my company email on my iPad.

Why?

a) in part because my company's IT department requires us to use VPN to access our Exchange servers, requiring me to type in extra passwords and adding connection delays when the VPN connection times out due to inactivity - whereas they allow mail reader apps on cell phones and tablets, both Android and iPhone, to access without VPN, using ActiveSync.

(Note: I know that IT can configure Exchange to be accessed across the net without VPN - my company did so before it was acquired.  But apparently our IT department does not trust that, whereas it was compelled to provide ActiveSync for mobile devices.
     I also have at least one email reader on my iPhone - Triage - that requires VPN.  A big reason that I do not use that email reader any more.
     Apparently Microsoft does not allow PC mail readers to use ActiveSync.)

b) also because of a recent Outlook problem, whereby it is unable to connect to the Exchange server until 15-30 minutes have passed - unless I reboot.


This probably is not big news to Microsoft - but this is something that Microsoft MUST fix, if they want people to continue using PCs.

All things being equal, I would prefer to read my email on my PC laptop, with its big screen, trackball, etc.  (Let's leave aside the question of Linux/Windows or which mail reader - Outlook or Thunderbird or ... - since AFAIK all PC or Mac mail readers have the same problem, in that they would have to go through VPN to access the Exchange servers, whether as native Exchange or IMAP.)

(Hmm, is there an ActiveSync mail reader that can work on a Linux PC?   Or is ActiveSync non-open software?)

But if using my PC is so much slower than using my iPad - well, that's one less Microsoft customer.

I have been considering a Microsoft Surface Pro or similar Windows 10 machine as a convertible tablet - capable of doing real work using PowerPoint and Excel.  But AFAIK the Surface Pro will suffer from the above-mentioned problems that slow down my PC email.   If it is slower than an iPad or Android tablet at reading email... one less Microsoft customer.

Thursday, May 07, 2015

https vs http - why not signed but not encrypted https?


From: Andy Glew
Newsgroups: grc.securitynow
Subject: https vs http - why not signed but not encrypted https?
X-Draft-From: ("nntp+news.grc.com:grc.securitynow")
Gcc: nnfolder+archive:sent.2015-05
Date: Thu, 07 May 2015 11:57:35 -0700
Message-ID:
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/24.4 (darwin)
Cancel-Lock: sha1:3QSNHoOOsLInTT2t9aCFbf/tYoY=

(New user in grc.securitynow. Longtime podcast listener. Very long time
ago USEnet user (not so much nowadays).  My apologies if this is a FAQ.)

OK, so there's a trend to encrypt all traffic - to use https, to
discourage http.  If for no other reason than to make man-in-the-middle
attacks harder.

One of the big losses is caching: the ability for somebody like a school
in a bandwidth deprived part of the world (like Africa, now; like
parts of Canada, when I grew up, although no longer so true) to cache
read-only pages that are used by many people.   Like the website I used
to run, and which I hope to bring back up sometime soon - a hobbyist
website for computer architects.  No ads.  No dynamic content.

Heck, like this newsgroup would be, if it were presented as webpages.

HTTPS encryption, with a different key for each session, means that you
can't cache. Right?



Q: is there - or why isn't there - an HTTPS-like protocol where the
server signs the data, but where the data is not encrypted?

(I thought at first that the null cipher suite in HTTPS / TLS was that,
but apparently not so.)

Having the server sign the data would prevent man-in-the-middle
injection attacks.

An HTTPS-like handshake would be needed to perform the initial
authentication, verifying that the server is accessible via a chain of
trust from a CA you trust.  (Bzztt.... but I won't rant about web of
trust and CA proliferation.)



Possibly you might want to encrypt the traffic from user to server,
but only sign the traffic from server to user.



So, why isn't this done?


It seems to me it would solve the "HTTPS means no caching" problem.




OK, possibly I can answer part of my own question: signing uses the
expensive public key cryptography on each and every item that you might want to
sign.  Whereas encryption uses relatively cheaper bulk encryption,
typically symmetric key protocols like AES.

Signing every TCP/IP packet might have been too expensive back in the early days
of the web. Not to mention issues such as packet fragmentation and recombining.

But note that I say "each and every item that you want to sign".
Perhaps you don't need to sign every packet.  Perhaps you might only
sign every webpage.  Or every chunk of N-kiB in a web page.
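Detached signatures already give a feel for the mechanics (a sketch using the
openssl command line; page.html, server.key, and server.pub are placeholder
names):

    # server side: sign the page once, at publish time
    openssl dgst -sha256 -sign server.key -out page.html.sig page.html
    # client (or cache) side: verify against the server's public key
    openssl dgst -sha256 -verify server.pub -signature page.html.sig page.html

The signature is over the content, not the connection, so any intermediate
cache can store and serve both files; only verification happens per client.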

A browser might not want to start building a webpage for display until
it has verified the signature of the entire thing.   This would get in
the way of some of the nice incremental fast rendering approaches.

But, perhaps the browser can incrementally render, just not enable
Javascript until the signature has been verified?   Or not allow such
Javascript to make outgoing requests?   I am a computer architect: CPU
hardware speculatively executes code before we know it is correct, and
cancels it if not verified.  Why shouldn't web browsers do the same?

I.e. I don't think latency of rendering should be an obstacle to having
cacheable, signed but not encrypted, HTTPS-like communication.

Probably the plain old computational expense would be the main
obstacle. I remember when just handling the PKI involved in opening an
SSL connection was a challenge for servers. (IIRC it was almost never a
challenge for clients, except when they opened too many channels to try
to be more parallel.)  What I propose would be
even more.



But:

(1) CPUs are much faster nowadays.  Would this still really be a
problem?

+ I'm a computer architect - I *love* it when people want new
computationally demanding things.  Especially if I can use CPU
performance (or GPU, or hardware accelerator) performance, which is
relatively cheap, to provide something with social value, like saving
bandwidth in bandwidth challenged areas of the world (like Africa - or,
heck, perhaps one day when the web spans the solar system).

(2) Enabling caching (or, rather, keeping caching alive) saves power -
now I mean power in the real, Watt-hours, sense, while requiring
signatures and verifying them consumes CPU cycles.   I am not sure that
the tradeoff prohibits what I propose.

Wednesday, May 06, 2015

MacBook doesn't like surprises: USB and display problems

Effing MacBook:



As usual, when I disconnect at home and come to work, I waste 15-30 minutes trying to get my displays and keyboard and trackball working.  If it was always the same problem I might have figured out a workaround - but the problem changes in a minor way.



Today, the problem was that when I plugged in at work, my external monitors worked, but my laptop LCD monitor was not working.  Black.  Not reported by System Preferences => Displays.



(Usually it is one or both of the external monitors that do  not work. And/or the USB keyboard and trackball.  But today it is different.)



Various attempts to fix, like unplugging the external monitors, going to sleep, etc., do not help.  Unplugging external monitors and going to sleep / waking up => a laptop with a blank screen.



So, as usual, I rebooted.



At the moment, I am rebooting my MacBook once or twice a day.  Usually to try to get a display to work, or to get my USB keyboard and trackball working.



Note that I say "both" external displays above.   I only have two external displays, a 30" HDMI, and a USB BVU195 display adapter.   On Windows I used to have 3 or 4 external displays, as well as my laptop LCD.  Not so on the MacBook.



I expected the Mac to handle multiple displays better than Windows?  Not in my experience! :-(



(MacBook Pro 15" retina Mid-2014)



---



I wonder if these problems are related to "surprise" disconnects - disconnecting from USB and external monitors without doing something to Mac OS-X first.   Windows used to have problems with such surprise disconnects circa 2000, perhaps Macs are just behind.  But I can't find any way to tell MacOS "I am about to disconnect you now".




Thursday, April 30, 2015

The Grammarphobia Blog: Why is “m” a symbol for slope?

The Grammarphobia Blog: Why is “m” a symbol for slope?: "In Mathematical Circles Revisited (2003), the math historian Howard W. Eves suggests that it doesn’t matter why “m” has come to represent slope.

“When lecturing before an analytic geometry class during the early part of the course,” he writes, “one may say: ‘We designate the slope of a line by m, because the word slope starts with the letter m; I know of no better reason.’ ”"




Wednesday, March 11, 2015

Buying a MacBook was a big mistake!!! :-(

For the New Year, I bought myself a MacBook - a MacBook Retina Pro.

My old Lenovo X220 convertible tablet PC was dying - SSD errors everywhere (SpinRite fixed them, but I could not trust the SSD any more).  I could have replaced the SSD, but chunks of the plastic case were falling off, the USB 3 port had long since failed...

Plus, although I loved my pen computer, I had started using an iPad (my daughter's, although she was not using it), so my need for portable web browsing and email on a screen larger than my phablet was being better met --- while my need for a portable computer with lots of pixels that was convenient to do real work on was not being met.  The 1366x768 pixels are not enough - even my iPad mini has more resolution.

Plus, my daughter uses a MacBook, since her old school insisted on Apple.  I wish that she would switch to something cheaper, like Windows or Linux on a Wintel laptop - but she is happy.  And I figured that I would be able to help her more if I were more familiar with the MacBook, through everyday use.

Plus, I just upgraded my wife's and my daughter's iPhones, and I switched myself from Android to iPhone. Plus the iPad I mentioned above. The big reason for this was TouchID. Fingerprints are the Next Killer App. (I suspect that this is the real reason for the jump in Apple's sales for Xmas 2014.)

So I figured: Why not be 100% Apple and buy a MacBook?

Big mistake.

So I bought a high end MacBook Pro Retina, 2880x1800, 1TB disk.

(Actually, despite all the reasons for considering Apple above, I might have purchased a good Windows PC with a retina level display - except that at the time I could not find one with a 1TB SSD from the manufacturer.  And I had just been burned by buying an aftermarket SSD from Crucial, that turned out to have major loss-of-data problems.)

Big mistake.

Now, the MacBook is a pleasant machine.  Sure, there are annoyances like the Mac's option and command rather than control and alt - but you can get used to those.  If I could get away with only using Mac applications.

Unfortunately, I have to use some Windows applications. And/or Microsoft applications running on MacOS.

For example, I am writing this plaintive blog entry because the MacOS-native Microsoft PowerPoint.app is hung. Again.  When I kill it, it hangs again.  I have had to reboot my MacBook twice today to clear this problem.

OK, so maybe the fault is Microsoft's rather than Apple's.  Nevertheless, since I had fewer problems on Windows, it's a ding against buying my MacBook. 

Perhaps I should use non-Microsoft tools, like Apple's Keynote.app to prepare slides? Or something Open Source? Sure... but I have to be able to exchange .PPT files.  And the non-MS apps often produce .PPT that is broken, that is not WISIWTG  (What I See Is What They Get). Sometimes they cannot open the files.  And there are features missing.

FrameMaker was a biggie.  I need to use FrameMaker, an obsolete techpubs app.  No longer available on UNIX, only Windows.  Attempting to use the obsolete Version 7 FrameMaker on an old SUN SPARC remotely across a slow network was painfully slow. So I transferred my PC FrameMaker license to Windows 8 running on Parallels on my MacBook.   That was one of the big reasons to buy a 1TB SSD - I had filled up my old 512GB SSD with just one OS, and now I had to install two.

And this works acceptably well.  I can use FrameMaker on my Mac.

But...  using a virtual machine environment like Parallels is a pain.  Now I have two OSes to maintain: two OSes that must be updated regularly.  Twice as many reasons to reboot.  Sure, if I am rebooting the Windows Guest I can continue to use MacOS - but not vice versa.

My original plan had been to try to only use FrameMaker on Windows under Parallels, and use native MacOS apps for everything else.  My Office license gave me access to the native MacOS versions of all of the Microsoft apps I use.

Except... they are all a bit off. A bit lacking.  E.g. conversation mode doesn't really work in native MacOS Outlook.app.  So I started using Outlook under Parallels.   But now if I click on a link, it starts Internet Explorer.  Gack!  So I have to install Chrome inside Parallels, as well as Chrome on MacOS...

Eventually I have ended up with both MacOS and Windows versions of most of the apps I use installed.

And now things get really confusing.  I must say ctl-C under Parallels, and cmd-C in MacOS.  Now, which version of the app am I using?   If you type the wrong keystroke at the wrong app, crazy things happen.  (Remapping the modifier keys is a slippery slope...)

The only way I can stay sane is to try, as much as possible, to only use the more functional Windows versions of apps. It's still confusing when I have MacOS apps versus Windows.

Tell me again: if I am mainly using Microsoft apps under Parallels virtual machine, why did I buy a Mac again?

Yes: I insist on having UNIX-like commands.   But Cygwin gives me most open source UNIX commands and Microsoft/Windows apps, with a lot less hassle than using MacOS for UNIX-like commands and Parallels for Windows apps.

(Perhaps things would be better with Linux as the host and Windows running in Xen.  Linux and Windows have more similar user interface behavior than Windows and Mac.)

And then there are the generic Mac shortcomings:

I hoped and expected that MacOS, being beloved of artists and graphics folk, would support multiple monitors well.  BZZTTTTT!!!!  On my dinky little Lenovo I used to drive 3 external monitors: a 30" 2560x1600, and two 24" 1200x1920 to read full pages of books and papers.  My MacBook can only drive 1 of each - this would be totally unacceptable, except for the fact that the laptop display itself is so nice.

Fingerprint: my old Lenovo had a fingerprint reader, no MacBook does (at this time).  This is especially annoying, since I am chanting "Fingerprints are the next Killer App", and bought an iPhone mainly to get TouchID.

LastPass does not work well in the Apple ecosystem.  I now have to type in my (long) password many more times a day than I used to.

MacOS apps usually do not come with uninstallers.  Supposedly they do not need uninstallers.  BZZZTTTT!!!!!   

Windows has some very useful user interface things - like bumping a window against the top or side to maximize. MacOS lacks these.   Add-ons like SizeUp help, but do not work for all apps.  Like, SizeUp does not work for EMACS, or for FrameMaker. My two most frequently used apps.

Overall, app behavior under the Window manager is much less consistent in MacOS.

MacOS has no equivalent of AutoHotkey. AppleScript comes close, but cannot do everything that AHK does.  (I had forgotten how many AHK shortcuts I used on Windows.  I can run AHK on Parallels - but then the shortcuts do not work everywhere.)

Did I complain about MacOS's lousy support for multiple, large, displays?

Like, you cannot move the notification area around.

Like, the Dock can only be at the bottom of the screen, or at one side of the whole arrangement - not at the side of whichever display you choose, the way I can move the Windows taskbar.

Windows spanning multiple displays on MacOS are awkward.  Not the default on Yosemite.  You can get them, but then you lose the Dock appearing on any display.

The whole basic Apple concept, dating back to Xerox, of a menu bar at the top of the screen, with several windows swimming in the display, is a big loss with a large display.  It's a LONG way on a 30" monitor, to move your mouse from the bottom right hand corner of a display to the menu bar at the top left hand corner. Would not be so bad if there were more keyboard shortcuts...  but Apple dislikes keyboard shortcuts, and no AHK equivalent.   No mouse warping.

Did I mention how expensive Apple hardware is?  PC hardware is about 33% less, if not half the price.

---

The list goes on and on.

After 2.5 months of using the MacBook, I like it less and less. 

It was probably a big mistake to buy a MacBook to run Parallels. It would probably be better to run Windows with Cygwin. Perhaps Linux with Xen to run Windows, but even that is 2X the OS sysadmin work.

Perhaps one day I will not need to use Outlook, or PowerPoint, or Word.  Then, I think, MacOS might be worthwhile.  But not now.

Now, I am waiting for Windows 10 to be released.  If there is a retina class Windows 10 convertible, with touchscreen and pen and fingerprint, enough RAM and a 1TB SSD, I will switch. If I can afford to.  If buying the MacBook has not exhausted all of my computer budget for this year.

---

Now, the smaller list of what I like about the MacBook:

It does have a nice LCD.  (But so do many PCs nowadays.)

There is a large variety of interesting IMAP mail clients in the Apple MacOS app store. Some of them are almost as good as the iPhone and Android mail client apps.  There are far fewer of these on Windows.

Much of what I do nowadays is cloud based web apps.  These usually run okay on both MacOS and Windows.

But native apps on MacOS, or Windows apps under Parallels?  Better to be native Windows.