Disclaimer

The content of this blog is my personal opinion only. Although I am an employee - currently of Nvidia, in the past of other companies such as Imagination Technologies, MIPS, Intellectual Ventures, Intel, AMD, Motorola, and Gould - I reveal this only so that the reader may account for any possible bias I may have towards my employer's products. The statements I make here in no way represent my employer's position, nor am I authorized to speak on behalf of my employer. In fact, this posting may not even represent my personal opinion, since occasionally I play devil's advocate.

See http://docs.google.com/View?id=dcxddbtr_23cg5thdfj for photo credits.

Thursday, December 21, 2017

WISH: Exception Coalescing Operator/Expressions, like Null Coalescing

BRIEF:



I might like exception coalescing expressions and operators, e.g.

result := expression1 ?if-exception expression2 ?if-exception expression3 ... 
result := expression1 ?if-exception(EXCEPTION_TYPE x) x 
along the lines of null-coalescing operators like Perl // or C# ??



DETAIL:



I very much like the null coalescing operators, like Perl's //

$a = $b // 'value if b is null/undef';
I suspect that I would also like the Elvis operator ?: if I could get away with using it (i.e. if I felt that my coworkers would not crucify me for using non-standard C), and various safe-traversal operators like ?. ?[] etc.



Today, writing some unit test code, I wanted to do simple exception handling, and I realized that I might like to have an exception coalescing operator - or, more generally, try-catch style exception handling as an expression rather than a statement.



I wanted to do 

result := f(input) or "error" if an exception was thrown
which usually looks like

try {
     result := f(input);
     assert( result == expected );
} catch( ... ) {
     assert( 'according to whether exception is expected or not' );
}
possibly testing for particular exceptions.



Of course, I (or the test framework) usually have functions like

test_assert_equals( f(input), expected_value )
or

test_expect_exception( f(input), expected_exception )
or

f := if_not_exception( f(input), 'exception_value' ).
but that can be quite clumsy.  And, in non-test cases, you may want to chain

f := if_not_exception( g( if_not_exception( f(input), 'exception_value' ) ), 'exception2' )
In general function(args) can be clumsy compared to prefix_fn(args).suffix_fn.suffix_fn2.
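For comparison, here is a minimal sketch in Python (names invented) of wrapping the whole chain in one helper. Each expression still has to be delayed behind a lambda, which is exactly the clumsiness a real operator would avoid:

def first_without_exception(*thunks):
    # Evaluate each zero-argument callable in turn; return the first value
    # produced without an exception.  If every one throws, re-raise the last.
    last_exc = None
    for thunk in thunks:
        try:
            return thunk()
        except Exception as exc:
            last_exc = exc
    raise last_exc

# result := int(s) ?if-exception int(t) ...
result = first_without_exception(
    lambda: int("not a number"),   # raises ValueError, so we fall through
    lambda: int("42"))             # succeeds: result == 42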





In this particular case, I think that I would like an exception coalescing operator

result := expression1 ?if-exception expression2
equivalent to

declare result
try {
     result := expression1;
} catch( ... ) {
     result := expression2;
}
return result
and chainable

result := expression1 ?if-exception expression2 ?if-exception expression3 ...
If we wanted to specify exactly what exception is caught

result := expression1 ?if-exception(EXCEPTION_TYPE x) x 
(although this raises the possibility of multifix vs infix binary)
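A sketch of how the typed variant could be emulated today, again in Python with invented names - catch only the specified exception type and bind it for the fallback expression; any other exception propagates unchanged:

def if_exception_of(expr, exc_type, handler):
    # result := expr() ?if-exception(exc_type x) handler(x)
    try:
        return expr()
    except exc_type as exc:
        return handler(exc)

# result := int(s) ?if-exception(ValueError x) "error: " + str(x)
result = if_exception_of(lambda: int("xyz"), ValueError, lambda x: "error: " + str(x))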



Q: what should the operator be?   While I could live with ?if-exception - especially in my oft-desired XML-based programming-language syntax - many folks would prefer something expressible in ASCII.  I would suggest ??, except that C# already uses that for null coalescing, equivalent to Perl //.



?//



But if we can't have a nice set of hard-to-remember symbols, how about traditional try/catch as an expression?

result := try { expression1 } catch( ... ) { expression2 }
This happens a lot - the similarity between expressions and statements.  In LISP everything is an expression. Python gets much of its readability by having a stricter boundary between expressions and statements. And debuggability suffers when the boundary blurs - accidentally making something into an expression when it should be a statement can be hard to find.



Perhaps there should be expressions corresponding to all statements - but with a minor syntactic indication.





















Sunday, December 17, 2017

Veggies for Breakfast - Henry’s PDX

VfB score: 90% (Veggies for Breakfast)

Many vegetable omelets have a lot of egg and not so much vegetable.

Not here!

This vegetable omelet has a hearty egg wrapper wrapped around a healthy serving of vegetables - a large amount of green and other sweet peppers, plus some onion and spinach.

Tasty even without hot sauce.

—-

VfB - Vegetables for Breakfast - my search for healthy food, protein and vegetables - for the most important meal of the day.

Friday, December 01, 2017

2FA watch app please! not phone

Duo Mobile: Duo Security:






My new employer uses Duo's push two factor authentication.  This is good...



But oh my gosh do I wish that this 2FA app was a watch app and not a phone app!  I am coming to hate having to go find my phone...  For some reason I need to do this much more often than in my personal 2FA usage - probably because I spend more time at work on my PC than banking, etc.  And Gmail and LastPass 2FA persists across (some) reboots.



If my employer's 2FA were time-based, I suppose that I could clone the TOTP secret to run on both my phone and my watch. Although I never trusted the TOTP implementation for my Pebble SmartWatch - I never trusted that the Pebble had good enough security, both inside the watch and in the synched-to-phone app.
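(Why cloning would work: a TOTP code is purely a function of a shared secret and the current time - RFC 6238 - so any device holding the secret and a reasonable clock produces the same codes. A minimal sketch in Python, standard parameters assumed:)

import base64, hashlib, hmac, struct, time

def totp(secret_base32, period=30, digits=6):
    # RFC 6238: HMAC-SHA1 over the number of 30-second periods since the epoch.
    key = base64.b32decode(secret_base32, casefold=True)
    counter = int(time.time()) // period
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Any device that knows the secret and the time produces the same 6 digits,
# which is why the same TOTP seed could live on both phone and watch.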



MORAL: SmartWatches need good security.



I would hope that the Apple Watch has good security.  That's a large part of the reason I switched from Android to iPhone.  Now if only the Apple Watch had decent battery life, I would gladly switch.  Being able to use my watch for 2FA would almost be enough to justify paying for cell phone connectivity for the watch itself, independent of the phone.



In my dreams, the smart watch would carry the SIM card, and temporarily delegate it to whatever more battery-endowed device it was close to - phone, or tablet, or laptop.  The smart watch could/should be the most secure device - more physically secure, or at least less likely to be left behind. Passwords by clicking buttons or tapping. Potentially endowed with biometrics like fingerprint and continuous heart-rate monitoring - redo slow authentication when the watch is taken off.

Will NNs for pleasant authentication - fingerprint, face, voice - fit in the SmartWatch form factor and battery profile?  I am willing to relegate training to a synched-to SmartPhone app; I would prefer NOT to let biometric authentication live in the cloud, although it is probably too late.  All of our faces are belong to Google, and Facebook, and ….



Amusing idea: passwords as a sequence of silly facial expressions.  NNs to recognize you tapping your favorite song on the phone.  Conscious control of heart rate or body temperature or galvanic skin response for authentication. Perhaps this will be the route to direct neural interfaces, the incremental step that provides partial value.


---



My Apple Watch owning friends are happy to rub it in that they have a watch app for Duo 2FA.



As are some of my Android Wear friends - except they say that too many clicks are required on the watch app, so they often pick up the phone.



For the record, I am currently wearing a FitBit Blaze not-really-smart watch.  Mainly as a fitness tracker, and that mainly because of FitBit's social fitness challenges with my friends. I was very disappointed to realize that the Blaze provides transient calendar notifications, but does not actually store my calendar on the watch.  Although FitBit bought Pebble, the Blaze doesn't yet support Pebble-style watch apps.

Sunday, November 05, 2017

OneNote [[links]] not quite as good as wiki

How to Master Microsoft Office OneNote: "Automatically create new pages, linked together in a master list: If you’re working on a project that you know will require several pages of notes, here’s a killer shortcut: Type two square left brackets followed by the title of the first note you want to make and then type two right square brackets at the end. OneNote will instantly create a new note with that title and a link to it in your current note. " 


I am glad that OneNote accepts  the double square bracket syntax to create links to pages that do not exist yet - a syntax that is common in wikis.



It is unfortunate, however, that OneNote automatically creates the page as soon as the [[link]] is typed - rather than when dereferenced, as wikis do.  Doing it wiki-style allows you to search for pages that you want but which have not been created yet - whereas doing it OneNote style results in a lot of empty pages.

Thursday, September 28, 2017

Standalone smartwatch without smartphone: working out, including swimming

I just revived my fitness club membership, which I had let lapse. It reminded me of one reason why I want to be able to use a smartwatch without a smartphone nearby - one reason why I will probably buy an iPhone X, or at least the new Fitbit Ionic, in the next year: workouts.

I like going to the club and listening to podcasts while doing a variety of exercises. On elliptical and rowing machine a smartwatch slaved to a smartphone isn’t so bad, although it would be nice not to have to carry the phone around. But swimming makes having a waterproof smartwatch independent of the smartphone much more desirable. Locking phone away in a locker while swimming risks theft, while wearing phone in pool in a waterproof bag while swimming, quite literally, drags. Whereas a waterproof smartwatch by itself means that all I need to place in a locker are my shirt, shoes, and towel. Which I am much less afraid of getting stolen.

The main feature I want for exercise is the ability to play podcasts. Both the iPhone X and the Fitbit Ionic can store audio (music) on the watch, connecting to a new Bluetooth headset. The iPhone X almost certainly has better podcast players.

I was about to say “Unfortunately, Bluetooth doesn’t work well in a pool. So I may fall back to using my old waterproof iPod Nano with earplugs - and hence not need the smartwatch to play the podcasts”.  But there are quite a few Waterproof Bluetooth earbuds for swimmers, so that may not be an issue.

=> stand-alone smartwatch with music/podcast playing to Bluetooth waterproof earbuds while doing workouts including swimming. That will be what makes me buy an iPhone X or Fitbit Ionic.

If I can get the “Gentle Wake” app from my old Pebble Classic, all the better.

Exercise and alarm clock: two things smartphones have never been good enough at to make me happy.

Friday, September 01, 2017

Amtrak - Experience - Onboard - Journey with Wi-Fi

Amtrak - Experience - Onboard - Journey with Wi-Fi: "Amtrak offers free basic Wi-Fi service"
I enjoyed using the wifi on the train from Portland, OR to Seattle yesterday. Got much work done.



Unfortunately, on the return trip today the wifi was ... bad.  Intermittent. Slow.  Frustrating.



(Furthermore, as I have noticed before, my Microsoft Surface Book has a habit of crashing when wifi is intermittent.)



---



Both trains were full.  Yesterday's left Portland circa 3pm.  Today's left Seattle circa 6pm.

Wednesday, August 16, 2017

Does Vision Research Drive Deep Learning Startups? | Chris Rowen | Pulse | LinkedIn

I think that I would like this conference format, in preference to a multitrack conference where there are multiple papers I want to watch in the same timeslot.



Half of the reason to attend a conference in person is to observe the audience questions.



I really like the idea of poster sessions for all presented papers.





Does Vision Research Drive Deep Learning Startups? | Chris Rowen | Pulse | LinkedIn:

So many teams submit worthwhile papers [to CVPR]  that it has adopted a format to expose as many people to as many papers as possible. Roughly 2500 papers were submitted this year, of which only about 30% are accepted. Even so how can people absorb more than 750 papers? All the accepted papers get into poster sessions, and those sessions are unlike any poster session I’ve seen before. You often find two or three authors surrounded by a crowd of 25 or more, each explaining the gist of the talk over and over again for all comers. Some notable papers are given a chance to be shared a rapid-fire FOUR minute summary in one of many parallel tracks. And even smaller handful of papers gets much bigger exposure – a whole TWELVE minute slot, including Q&A!   
Remarkably the format does work – the short talks serve as useful teasers to draw people to the posters. A session of short talks gives a useful cross section of the key application problems and algorithmic methods across the whole field of computer vision. That broad sampling of the papers confirms the near-total domination of computer vision by deep learning methods. 



Sequences sunburst



Sequences sunburst:






I love data visualization. I love AI.



I want to love this visualization (of papers at the CVPR conference), but my wanna love is outweighed by my "I hate data viz that is too low bandwidth".



You can only "understand" this visualization interactively, by flying over the pie chart slices to see the labels. For somebody like me, who tends to absorb a whole picture all at once, this is much slower than presenting the piechart with labels or a legend.  Even if the legend is just color coded, because the labels don't fit.  But if the labels can be spatially attached, by proximity or by arrows, all the better.



I don't really have a photographic memory, but I have a cartoon-ographic spatial memory.  I remember pictures, usually highlighting the important areas. Sometimes my visual memory is augmented by time domain "zooming" into areas of interest.



This flyover graphic requires time domain sequential memory just for the first level absorption.  At the very least it is slower; but I suspect it also displaces the use of time domain memory for deeper understanding.







I am impelled to blog this because it is so bloody easy to make this visualization higher bandwidth.



It's a nested pie chart.  There is room for the labels for most slices of the innermost pie ring.  Even on the outermost pie ring.   I.e. without even changing the graphic most of the segments could be labelled, with interaction to dive into the segments too small for such trivial labelling.
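A minimal sketch of what I mean, using matplotlib's nested-pie idiom; the categories and counts are invented placeholders, not the CVPR data:

import matplotlib.pyplot as plt

inner = {"Recognition": 40, "Detection": 35, "Segmentation": 25}
outer = {"CNN": 30, "RNN": 10, "GAN": 15, "Classical": 20, "Other": 35}

fig, ax = plt.subplots()
# Outer ring, labelled in place; no hovering needed for the first-level read.
ax.pie(list(outer.values()), radius=1.0, labels=list(outer.keys()),
       wedgeprops=dict(width=0.3, edgecolor="w"))
# Inner ring, labels pulled inward so they sit on their own wedges.
ax.pie(list(inner.values()), radius=0.7, labels=list(inner.keys()),
       labeldistance=0.75, wedgeprops=dict(width=0.3, edgecolor="w"))
ax.set(aspect="equal")
plt.show()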



And if you have the ability to explode sections - dynamically redraw - even more so.







I have probably made an enemy here, if this ever gets back to the author. (If you are the author, I would love to talk to you.  If only to thank you for the raw data wrapped in this visualization.)



I have probably also dated myself, because video presentations are more and more the fashion.







Yes: I am the sort of person who hates watching videos, because I can read papers or slidesets faster than videos.   When I watch videos, I like fastplay and fast forward. I especially like video players that  recognize slide boundaries, so that I  can jump from slide to slide - and then only backtrack when it seems likely to have interesting discussion.



  • We can talk faster than we can type
  • But we can read faster than we can listen. Or watch video.


So, which is more important?


























Sunday, August 13, 2017

Commutative FP Addition and Multiplication

In computer arithmetic, floating point addition and multiplication cannot be associative, because of rounding.



I.e. (A+B)+C != A+(B+C)



But... FP add and multiply are frequently also not commutative, in floating point instruction sets.  Not fundamentally, not because of rounding, but because of NaN and other special value propagation.



I.e.

   FADD fd := fa + fb      !=       FADD fd := fb + fa    



because, if both operands are QNaNs, the instruction set may be defined to propagate the QNaN in the first operand.



This simple rule allows compiler control - but it breaks commutativity.





Losing commutativity has several downsides:



a) the instruction set cannot use operand order to provide extra information, essentially an extra opcode bit.



b) it means that a compiler cannot freely reverse operands.



This does not matter for the non-destructive three-operand form

   FADD fd := fa + fb      !=       FADD fd := fb + fa    


But it does matter for the two-operand src/dst form:



   FADD fa += fb      !=       FADD fb += fa    


    





Other NaN propagation rules would support commutativity.



E.g. choose the NaN whose value is smallest, if the entire NaN is interpreted as an integer bitpattern.  Or largest. Signed or unsigned.



(This may be appropriate if something like a line number is encoded in the NaN.)
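A sketch of such a rule, in Python on doubles (the same idea applies in hardware to any IEEE format):

import struct

def bits(x):
    # The double's IEEE-754 encoding, viewed as an unsigned 64-bit integer.
    return struct.unpack("<Q", struct.pack("<d", x))[0]

def select_nan(a, b):
    # NaN-propagation rule for an FADD/FMUL with at least one NaN input:
    # if both operands are NaN, keep the one whose bit pattern is smaller
    # as an unsigned integer; otherwise keep whichever operand is the NaN.
    # The choice does not depend on operand order, so commutativity survives.
    if a != a and b != b:          # x != x is true only for NaN
        return a if bits(a) <= bits(b) else b
    return a if a != a else b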



E.g. merge NaN operands in some way.


Tuesday, August 08, 2017

EverNote encrypts more flexibly than OneNote

Thinking about storing something semi-private in my OneNote  - nothing really important, just some medical appointments.



I've used both EverNote and OneNote, but am currently using OneNote.



Comparison: in this regard - encryption - EverNote seems to win (assuming the crypto is done properly).



Lifehacker Faceoff: OneNote vs. Evernote: "In OneNote, you can encrypt entire notebook sections, but that's only for the premium (paid) Microsoft Office versions. In Evernote, select text and right-click to encrypt it."

Saturday, August 05, 2017

Google Calendar - won't show old events

Google is missing an opportunity to be useful to people, end-users.

A calendar of future events naturally segues into a view of a journal of old events.

But I can't seem to persuade GCal to show me some events from 2011.

(Maybe it's there, and I can't find it: same diff)

Monday, July 17, 2017

ISO a decent PIM - I wish I had InfoCentral again!!

As I keep looking for a better PIM, a better to do list manager, a better cal...: " I keep looking for a better PIM, a better to do list manager, a better calendar"






The closest I have ever come to a PIM that made me happy was InfoCentral on my Compaq Concerto running Windows for Pen Computing. Circa 1994. Then later at UW Madison 1996-2000, on another pen/tablet - I think a Toshiba or Epson?  I have had so many…

I have blogged about this before: https://plus.google.com/+AndyGlew/posts/jDTJxgJ78F3

Possibly a description of InfoCentral: http://www.macros.koenecke.us/InfoCentral/whyic.html

The first and foremost advantage of InfoCentral, of course, is its linking technology. Using this, any object - be it a person, organization, event, task, file on disk, or a custom-created object - can be linked to any other object. Further, the links themselves are objects.

From <http://www.macros.koenecke.us/InfoCentral/whyic.html>



I loved InfoCentral's flexible but semi-structured format.
  • There were objects, and connections between objects.
  • Connections were themselves objects, and could have properties.
    • Connections could appear asymmetrically at the objects to which they were attached: e.g. father/daughter.
    • Connections could have date ranges.  E.g. you could record an old address, but mark it no longer valid for searching.
  • Objects had types and fields.
    • I liked this - sometimes.
    • Although I often found it annoying, e.g. when non-European name formats did not match up to InfoCentral's templates.
    • IIRC you could hide any empty fields - not sure about that, but obvious.
  • You could create subtrees at any point - I thought of it as "shaking" the tree.

I think of InfoCentral as a network database.

But it was fairly obvious that InfoCentral was implemented, or at least could be implemented, as a set of relational tables cross-linked to each other.  The main reason I haven't reimplemented this is that at the time RDBMSes were not that common.  But nowadays, with SQLite, much easier.
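A minimal sketch of what I mean, using SQLite from Python - my guess at a plausible schema, not InfoCentral's actual design. Objects and connections are both rows, and a connection is itself an object, so it can carry asymmetric roles and a validity date range:

import sqlite3

db = sqlite3.connect("infocentral-ish.db")
db.executescript("""
CREATE TABLE IF NOT EXISTS object (
    id    INTEGER PRIMARY KEY,
    type  TEXT NOT NULL,     -- person, organization, event, task, file, link, ...
    name  TEXT
);
CREATE TABLE IF NOT EXISTS link (
    object_id  INTEGER PRIMARY KEY REFERENCES object(id),  -- the link IS an object
    from_id    INTEGER NOT NULL REFERENCES object(id),
    to_id      INTEGER NOT NULL REFERENCES object(id),
    from_role  TEXT,         -- e.g. 'father'
    to_role    TEXT,         -- e.g. 'daughter'
    valid_from TEXT,         -- ISO dates; NULL means open-ended
    valid_to   TEXT          -- e.g. an old address no longer valid for searching
);
""")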

My main problem with InfoCentral was that it could not handle pen or bitmaps.  But I worked around this by linking InfoCentral to … I can't remember the name, Inkwriter? AHV? The software that I believe Microsoft acquired, and which I think may have evolved into OneNote.

Mainly, I used InfoCentral nodes as a superstructure - more than a table of contents, but less than fully integrated - to the "digital paper" or the-software-whose-name-I-cannot-remember.

Drawback: could not create a TODO list by hand in the open notes, and then have InfoCentral know about it.

But better than nothing.

The best PIM setup I have had to date.


Why do I no longer use InfoCentral/Ink?   I stopped using pen computers for a few years - hard for pair programming. I returned to pen computers in 2002 at AMD: one day I started writing the K10 Spec, got frustrated, and at lunchtime I went out and bought a tablet PC, a Toshiba P4400!

But I think that InfoCentral and this Inkwriter?? were no longer available.

IIRC Microsoft bought the Inkwriter?? company, and closed it down.  MS may have made either InfoCentral or Inkwriter?? available freely, but binary, not source.


Why don't I revive InfoCentral?
  • I should.
    • I think that I am egging myself on to do this in this note.
  • But I need the digital paper software.
IMHO InfoCentral is fairly straightforward.

Why don't I revive Inkwriter??
  • Much more UI stuff. Autoplacement.
  • If I could get OneNote convenient for starting single files or linking back…
  • Years ago, UNIXes had little pen/touch support.


It would be fun to write myself - but it would take a long time to get to a usable place.

I really do just wish it was out there in a form that I could buy.

Saturday, July 08, 2017

Waterproofed Fitbit Blaze

Waterproofed Fitbit Blaze: "Waterproofed Fitbit Blaze"


I just googled "waterproof Fitbit watch", since I have been waiting for the Fitbit Blaze replacement that is supposedly swimmable. Search turned up WaterFi, a company that does aftermarket waterproofing. 

WaterFi has a good rep, both on web and by personal experience - I have a waterfi'ed iPod Shuffle.

WaterFi sells a number of aftermarket waterproofed trackers, ranging from Fitbit Charge and Charge 2 HR to Fitbit Blaze.

They don't change the software, so no "swim mode". But reports say that they count "strokes" as "steps", which is good enough. 

The waterproofing closes up the air pressure port, so the altimeter "flights climbed" feature breaks. Minor sadness: living on a hillside, vertical is a useful exercise metric, correlates to intensity; but Fitbit does not do much with vertical, doesn't use for challenges, etc.

$329 for the large waterproofed Blaze, $300 small, vs. $199 list, sometimes $150 on sale. Since WaterFi offers a $99 waterproofing service, it might be more cost-effective to buy from FitBit or a reseller, and then send to WaterFi.

I am especially interested in the WaterFi'ed Fitbit Blaze, which has standard watchstraps (22/23/24mm? - many for sale on Amazon). "Leaked" photos of the (late, not yet shipping) new Fitbit watch seem to show that it does not use standard watchstraps.


---+ WaterFi'ed iPod Shuffle

A few years ago I bought a WaterFi-ed swimmable iPod Shuffle, which I used to listen to podcasts while swimming for quite a while. (Notes: music fine; podcasts okay while doing breaststroke, but hard to hear doing crawl, and always miss something during tumble turns.) Still have, still works; I only stopped using it because (a) swimmer's knee (breaststroke), (b) got Fitbit, stopped swimming, (c) it's a pain to sync podcasts to - modern podcatchers don't seem to handle non-phone devices like iPods.  But I still use it occasionally, when I have a lot of podcasts or audiobooks to upload, amortizing the hassle of doing so.

Prompts thoughts: one of the things that attracts me about an Apple Watch is that it can supposedly play podcasts from the watch even when not carrying phone.  But I'll bet this doesn't work in the water (?)

Friday, June 23, 2017

ISCA 2017: " Ignore the warnings about the certificate."

ISCA 2017: " Ignore the warnings about the certificate."
It seems wrong that the webpage for ISCA, the International Symposium on Computer Architecture, which has two sessions on security and also includes a workshop on Hardware and Architectural Support for Security and Privacy, cannot do certificate based security properly.



I haven't looked at the certs, but I understand why: creating a cert for a subdomain with such a relatively short active life is a pain. Yes, even with the EFF's free certs.  (Note that conference websites often survive years, sometimes decades - but nobody gets credit for securing such a site after the conference ends. (Hmm, maybe bad guys who want to attack the sorts of people who go to such conferences should make such stale sites infectious.))



Still: computer architects  should know enough to get valid certs.



And I wish that the world was more comfortable in allowing domain owners to issue signing certs for subdomains, and so on.

Tuesday, June 06, 2017

(D)VCS branching models: notes in progress

I like branches.  But I don't like YOUR branches

I don't like git's branches. But then again, I don't really like Mercurial's branches. Or Bazaar's branches. Or Perforce/SVN/CVS/RCS branches. I may be polluted: I used CVS branches extensively, back in the day. Heck, I used RCS branches in RCS-wrapper-tools. I've used Mercurial and Bazaar branches extensively.   I like what Brad Appleton has written about branches in
Streamed Lines: Branching Patterns for Parallel Software Development - but then again, Brad and friends are really talking about streams of software development, not branches.  This may really be the problem: several different concepts use the same underlying mechanism.

This post is work in progress. I want to make some notes about branches, often in reaction to statements in other webpages. I will try to properly reference those webpages - but I am more interested in evolving the ideas than being properly academic.

Why I am writing this

1) Writing stuff like this helps me understand the differences between tools, and adapt my work style to new tools.  Although I have been using git for more than 10 years, it has only become my primary VCS recently for personal stuff - and I have to use Perforce at work. Most recently I mainly used Bazaar for personal stuff, but bzr is declining. Mercurial for some projects at work.  Plus, in the more distant past, SVN, CVS, RCS.

Similarly, I may not have noticed features added during git's evolution.

2) I am intrigued by analogies between version control software and OOO speculative hardware.  Git, in particular, is all about rewriting history to be "nice". OOO speculation is similarly about making execution appear to be in a serializable order.  Similarly, memory ordering hardware often performs operations in an order inconsistent with the architectural memory ordering model, but monitors and/or repairs to be consistent.

3) I am just plain interested in version control.

3') I had started writing my own DVCS, which I abandoned when git came out.  Mine was intended to have better support for partial checkins and checkouts, not just of workspaces, but of the entire repo.  It was intended to be able to handle repos for the same or overlapping source trees that had been created independently - i.e. that did not have common ancestors within recorded history. (Why?  Think about it...)

Immediate Trigger

I knew that git branches are really just refs to versions - with what others might call a branch being some form of transitive closure of ancestry. Not quite the same thing, but tolerable.

Even with this, I felt strongly that git branches are not really first class.

I was flabbergasted when I learned that git branch descriptions are not transferred to remote repositories.

This suggests a definition of a requirement for a DVCS to treat something as a first class concept:  the objects representing that concept should be versionable, and pushable remotely.   

Random Note Snippings

---+ Branch Names and Versions

Several writers on DVCS, usually git advocates, have said that the problem with Mercurial style branches is that they are recorded in the commit history, and that this prevents deleting or renaming branches.

For example:
[Contreras 2011] In Mercurial, a branch is embedded in a commit; a commit done in the ‘do-test’ branch will always remain in such a branch. This means you cannot delete, or rename branches, because you would be changing the history of the commits on those branches. 
although I recall but cannot find a better statement.

(Yeah: Mercurial's obsession with immutable history tends to get in the way of clear thinking.  But HGers (huggers?) rewrite history all the time, eg via rebase. So imagine that we are talking about a hypothetical VCS that wants to keep some of the good things about CVS and HG and BZR style branches.)



So: branch names need to be deleted and renamed.  It would also be nice to be able to hide the branch names and the branch contents. But probably more important, branch names may need to be reused.  And quite likely different developers may want to have different branches that have the same name, i.e. different branches with the same name may need to be distinguished, especially if simultaneously active.

Below, I go on about naming conventions for contours (a set of file versions), and branches. E.g. names that are permanent, e.g. a contour name RELEASE-2017-06-15-03h13UDT_AFG, versus floating LATEST-RELEASE. Or a branch name, like a task branch BUGFIX-BRANCH-ISSUE#24334, versus a more longlasting branch or stream R1+BUGFIXES-MAINTENANCE-BRANCH.

Insight: whenever you are tempted to put uniqifying info like date or unique-number in a name, you are thinking about versioning.

Wait!  We are talking about version control systems!!!  VCSes are all about uniqifying different objects with the same name!   For that matter, so are hierarchical directory structures.  And so on, e.g. object labels and tags.

==> How to distinguish different branch objects with same name.

==> Encourage actions that help distinguish branches.

E.g. instead of saying "switch to branch BBB", where BBB is created if it does not already exist,

Prefer "create new branch BBB" which may warn you if name BBB already exists.

==> PROBLEM:  tools that might simply go "merge branchname" might now have to say "branchname is not unique - which instance of branchname do you want to merge?"  Yet another error case, but not necessarily a real error, just an ambiguity.

How do we resolve such ambiguities?

a) query

b) priority - eg. PATH for hierarchical. Choose the branch that is "closest" to the guy doing the merge.



---+ Contours? Who needs Contours?

"Contour" is my old RCS-era name for a set of file versions.  

"Whole repo" VCSes don't need contours, since any commit implies state for all files.

Except... when a project is assembled from multiple repos.  Even here, VCSes that have subrepo support usually are smart enough to include the commit or checkin or version number of all subrepos in their top level commit.

But ... subrepos don't scale all that well.  E.g. not so much for my personal library, where each directory node should be considered separately versionable.

---+ SVN / Perforce Style branches

As I say elsewhere:
VCSes are all about uniqifying different objects with the same name!   For that matter, so are hierarchical directory structures.  
SVN and Perforce subsume the former in the latter: branches are really just trees in the hierarchical directory structure.

...Pros/cons. Workspaces assembled from multiple branches.   Where does the branch level live? "Floating"

---+ Git Branch Descriptions are not first class

[StackOverflow 2012 - git - pushing branch descriptions to remote]  The description is stored in the config file (here, the local one, within your Git repo), then, no, branch descriptions aren't pushed. Config files are not pushed (ever). See "Is it possible to clone git config from remote location?"

Simple text files are, though, as my initial answer for branch description recommended at the time.
Branch descriptions are all about helping make an helpful message for publishing. Not for copying that message over the other repos which won't have to publish the same information/commits.
 I can't criticize the guy who provided this answer, VonC, because earlier he discussed exactly this issue, proposing using text files to hold pushable branch descriptions - in exactly the same way that I have hacked branch descriptions before in other VCSes, and with exactly the same problems.

Using text files to hold branch descriptions is potentially an example of what I might call a file that wants to cross branch boundaries.  Or, a workspace that is mostly branched, but which usually contains the mainline of the branch description text file.

Sure, you may not always want that.  But it is nice to be able to do so.

---+  [StackOverflow 2009]

[StackOverflow 2009]: Git glossary defines "branch" as an active line of development. This idea is behind an implementation of branches in Git. ... The most recent commit on a branch is referred to as the tip of that branch. The tip of the branch is referenced by a branch head, which is just a symbolic name for this commit.

A single git repository can track an arbitrary number of branches, but your working tree (if you have any) is associated with just one of them (the "current" or "checked out" branch).

GLEW COMMENT: I have often wanted to create working trees which are composed of several branches. Yeah, yeah - you can simulate this by merges - but I want to make it convenient. 

E.g. say that a particular configuration = mainline of most code, but the FOO branch of some library libFoo.   Yes, this is almost equivalent to saying that this configuration is really all the FOO branch - but it provides more information, in saying that "Yes, the configuration is FOO specific, but in general we expect only the libFoo library to be different with FOO."  

My thoughts on partial checkins and checkouts often involve this. More, partial repositories. Referencing tools and repos that have separate version control systems.  libXXX may be checked into its own repo in isolation as that repo's mainline.   But from the point of view of some other tool that uses XXX, say T, libXXX's mainline is not T's mainline. Yet(?).  A partial checkin of libXXX amounts to creating a CANDIDATE for T's mainline.  Once the candidate is tested, it becomes T's mainline, assuming tests pass.  But if tests fail, T's version of libXXX may lag, or may fork and diverge from libXXX's mainline.

This notion of "candidate" maps well to Git's model.  Such a candidate is just a HEAD. Once tested, the candidate label may go away, and no longer clutter our listings of branches and tags and other named references.


---+ [Contreras 2011] and [Contreras 2012] 


[Contreras 2011] and [Contreras 2012] provided good comparisons of the Git and Mercurial branching mechanisms.  But Contreras is fairly rabid about git, and makes many statements of the form "Why would anyone ever need to do it that way?  There's a different way to do it in git. Or, you should not need to do it - I never have." That sort of statement pisses me off, even when I agree with it.

[Contreras 2011] Reacting to Google’s analysis  comparing Hg with Git, that says that History is Sacred.
This was an invalid argument from the beginning. Whether history is sacred or not depends on the project, many Git projects have such policy, and they don’t allow rebases of already published branches. You don’t need your SCM to be designed specifically to disallow your developers to do something (in fact rebases are also possible in Mercurial); this should be handled as a policy. If you really want to prevent your developers from doing this, it’s easy to do that with a Git hook. So really, Mercurial doesn’t have any advantage here.

GLEW COMMENT:


(1) I agree: it MUST be possible to change history. 


(1.1) Or at least to be able to remove some things from the history, e.g. it must be possible to remove code that you do not have a license for, that was inappropriately checked into your repo.  Or possibly code that you HAD a license for at some point in time, but for which the license expired.

I would prefer it if the code with license problems was removed, but some sort of note left behind.  Possibly an automated note, e.g. with a crypto checksum/hash and other metadata, so that you could determine what the missing code should be if you ever again have a license.

But I can also imagine the need to hide one's tracks: to completely expunge all mention of the unlicensed code.  Trying to avoid lawsuits.

(1.2) Plus, I like the good history rewriting stuff like rebase.


(1.2') Even better if we can change our view of the history, without losing the history

BUT...  I really would prefer that rebase did not lose history.   I think that it can sometimes be useful to know that a branch started off with a different original base, and was rebased later.  If nothing else, it can explain bugs caused by the rebased code using an idiom that was otherwise eliminated between original base and the new rebase's origin.  I think of this as an original branch, and a rebase'd shadow of that original branch.

Yes, clutter:  But I think that we need to create a UI that hides such clutter, that presents only the clean history, but which remembers all the dirty details.

[Contreras 2011]   It’s all in the branches ... Say I have a colleague called Bob, and he is working on a new feature, and create a temporary branch called ‘do-test’, I want to merge his changes to my master branch, however, the branch is so simple that I would prefer it to be hidden from the history.

GLEW COMMENT:  so hide it already.   Hide = leave in the history, but don't show it by default.  As opposed to removing it from the history.

[Contreras 2011]  hg branch != git branch In Git, a branch is merely one of the many kinds of ‘refs’, and a ‘ref’ is simply a pointer to a commit. ... In Mercurial, a branch is embedded in a commit; a commit done in the ‘do-test’ branch will always remain in such a branch. This means you cannot delete, or rename branches, because you would be changing the history of the commits on those branches. You can ‘close’ branches though. As Jakub points out, these “named branches” can be better thought as “commit labels”.
GLEW COMMENT:  Key: git branches are just refs.  Specifically, the ref to the tip of what other models call a branch.  AFAICT there is not much distinguishing a git branch from other refs.  
There should be different types of ref.   E.g. a named ref, i.e. a VERSION of all files.   Some VERSIONS are intended to be fixed, immutable - e.g. "Passes-all-tests-date-YYYY-MM-DD-HH". Other VERSIONS "float" - e.g. "Passes-all-tests-LATEST".   
But such a version named ref is very different from a branch.  A branch is a set of versions, that probably have some parent-child relationship. I.e. a (contiguous) path through the DAG.
[Contreras 2011] In Mercurial, a branch is embedded in a commit; a commit done in the ‘do-test’ branch will always remain in such a branch. This means you cannot delete, or rename branches, because you would be changing the history of the commits on those branches. 
Bullshit. Obviously Mercurial has history rewriting tools, that can do things like deleting or renaming branches.
But, an important point underlies the git-centricity:  Mercurial records the branch a commit was made on in the commit metadata.  By default.  Obviously git can also do this - see [StackOverflow 2015 - add Git branch name to commit message] - but it does not do so by default.
"By default" matters.  One of Glew's Rules: First provide the capabilities. Then design the defaults. Git may provide the capabilities.  But many properties are implicit, convention, in git.  Not first class.
And, yes, branches may need to be renamed. (Although as usual I would like to be able to rename, but also remember the old name).   For gitters that have added branch names to the commit message, you could edit all the commit messages.  But if the branch name is typed metadata, standardized, it could be automatically recognized and renamed.
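For example, a sketch of the hook approach - a prepare-commit-msg hook written in Python; the [branch: ...] tag is an invented convention, not anything git defines:

#!/usr/bin/env python3
# Save as .git/hooks/prepare-commit-msg and make it executable.
# Records the current branch name in every commit message, roughly
# approximating Mercurial's "the branch is part of the commit" behaviour.
import subprocess, sys

msg_file = sys.argv[1]   # git passes the path of the commit-message file
branch = subprocess.check_output(
    ["git", "rev-parse", "--abbrev-ref", "HEAD"], text=True).strip()

with open(msg_file, "r+") as f:
    body = f.read()
    if branch != "HEAD" and "[branch: " not in body:
        f.seek(0)
        f.write("[branch: {}] {}".format(branch, body))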
GLEW COMMENT:  since Git's "branches" are really just the tips of a branch, the set of versions on the branch is really the set of ancestors. Whereas Mercurial's branches, labelled in the commit history, indicate path taken for different reasons.
[Contreras 2011]  I paraphrase: "Mercurial bookmarks are like git refs (bit with no namespace support)."

One poster said that "Mercurial really wants a linear history".   But the git advocates' examples often rewrite a nonlinear history like [graph not shown] to a linear history.
Seems to me like the gitters want a linear history, and delete (not hide) the non-linearities.
TBD: put an example of what I mean: messy history, and linearized "clean" view.
GLEW COMMENT: I was pissed first time I created a task branch in git, and then merged. In CVS and Mercurial (and probably others) I expected and wanted to see a node on the master saying "merged task branch".  Even if there had been no intervening changes on the master.  Instead git just pointed the master's HEAD to the task branch - i.e. the task branch lost its identity.  Better have done [StackOverflow 2015 - add Git branch name to commit message] !!! - if the task branch name was the bug number.  (Yeah, yeah, you can just add a hook.  Everything can be hooked. Yeah, yeah.  (That's an example in English of a double affirmative being a mocking negative.))

(Eventually learned about Git --no-ff, disabling "fast forwarding" on merges.)

[Contreras 2012] The fundamental difference between mercurial and git branches can be visualized in this example:
[Figure: merge example]
In which branches is the commit ‘Quick fix’ contained? Is it in ‘quick-fix’, or is it both in ‘quick-fix’ and master? In mercurial it would be the former, and in git the latter. (If you ask me, it doesn’t make any sense that the ‘Quick fix’ commit is only on the ‘quick-fix’ branch)
In mercurial a commit can be only on one branch, while in git, a commit can be in many branches (you can find out with ‘git branch --contains‘). Mercurial “branches” are more like labels, or tags, which is why you can’t delete them, or rename them; they are stored forever in posterity just like the commit message.
GLEW COMMENT: Yes, this is a key difference.
We might talk about branches and sub-branches.  'Quick-fix' is a sub-branch of 'master'.
There might be branches  or paths that start off in the 'master' branch, and end up in 'some-other-branch'. Such a "crossing-branch" is not really a sub-branch at all.
In fact, a branch that is merged and then terminated is no longer a branch at all.  At least, in trees, branches usually do not start off low down in the trunk, and then merge back into the trunk.  Although this can be arranged by grafting.  Hortitorture.
I would like to have better terms.  "Streams" can diverge and recombine, but "streams" are too dynamic. "Paths" may be a better term, although paths can be bidirectional, and version control systems usually go forward in time. Paths can fork and merge.  Paths may be created out of distinct stepping stone nodes.
(Hmm: railway "tracks" might be even better than paths. Similarly bidirectional. Tracks can fork and merge. Tracks can be shunts. Sidings. Tracks have railway ties => rather like nodes.)
(Or possibly roadways. Networks of one way streets.  Side streets, dead ends, cul de sacs.  Mutiple lanes, that may be divided - rather like the parallel streams we so often see. Service roads running beside major highways. ...)

(Later: perhaps "routes", as in rock-climbing?  Rock-climbing routes are usually, mostly, one-way.  Although I like downclimbing, most people rappel down; and down-climbing is different enough that down-climbing routes are frequently not the same as up-climbing routes.)

(Or, how about ski-trails?  Again, mostly one-way, downhill in this case.)
But "branches" are the term most people use.  Even though many people have different ideas about what a branch means.
So back to talking about branches and sub-branches. 'Quick-fix' is a sub-branch of 'master'. 'Quick-fix' is one of the two paths that lead from the initial commit to the head of the master path above. The checkin "Quick fix" is on the branch(path) "quick-fix", and leads to the node "Merge branch quick-fix" on branch "master".
AFAICT git has no concept of a branch as a path - a contiguous directed linear subset of nodes - versus the set of all nodes/paths leading to a node.
Much of [Contreras 2012] amounts to confusion about these concepts.
And then piling on immutability, Mercurial's recording branch in the commit metadata.



[Figure: merge example]
GLEW COMMENT: the way this graph is drawn is biased towards git's model, where the branch is designated by its youngest node.   TBD: draw with 2 or more nodes on each path.  Color the sets of nodes on each path as the branch.

[Contreras 2012] Anonymous heads are probably the most stupid idea ever; in mercurial a branch can have multiple heads. So you can’t just merge, or checkout a branch, or really do any operation that needs a single commit.
One of GLEW'S OBSERVATIONS: the most important thing is to be able to give something a name.  The next most important thing is to not be required to give it a name.
Mercurial's anonymous heads can be a pain.  Just like arithmetic zero.

[Contreras 2012] Git forces you to either merge, or rebase before you push, this ensures that nobody else would need to do that


[Contreras 2012]
I didn’t ask for a list of all the commits that are currently included in the head of the branch currently named ‘release’ that are not included in the head of the branch currently named ‘master’. I wanted to know what was the name of the branch on which the commit was made, at the time, and in the repository, where it was first introduced.
How convenient; now he doesn’t explain why he needs that information, he just says he needs it. ‘git log master..release‘ does what he said he was looking for.
Pissant arrogance, lack of imagination. Here's an example of why you might want the branch name: some workflows put a BugFix#, Issue#, or ECO#, in the branch name.
Sure, there are other ways to do that, both in git and other VCSes.
But: it's a convention, as are, usually, those other ways.
Here's another way of thinking about compatibility between VCSes: it would be nice if procedures and concepts ported.  It would be nice if you could import from, say, Mercurial to git, and then export back to Mercurial, and get (almost) exactly the same repo.


Some articles and references

[StackOverflow 2009] := StackOverflow: Pros and Cons of Different Branching Models in DVCS

[Brad 1998] := Streamed Lines: Branching Patterns for Parallel Software Development TBD notes

[Contreras 2011] := Mercurial vs Git - It's All in the Branches.  Nice overview, although Git biased.

[Contreras 2012] := No, mercurial branches are still not better than git ones; response to jhw’s More On Mercurial vs. Git (with Graphs!)

[StackOverflow 2015 - add Git branch name to commit message]

[StackOverflow 2009 Jakub] := Git and Mercurial – Compare and Contrast - much liked by [Contreras 2011]. TBD - notes

TBD: J. H. Woodyatt's blog posts: Why I Like Mercurial More Than Git, and More On Mercurial vs. Git (with Graphs!)

[StackOverflow 2010 - Branch descriptions in git] - especially interesting to me because, along with mention of the then new branch description feature, VonC discusses shortcomings of that feature, and use of text files as a not-really-satisfactory but possibly better alternative.