Disclaimer

The content of this blog is my personal opinion only. Although I am an employee - currently of Nvidia, in the past of other companies such as Imagination Technologies, MIPS, Intellectual Ventures, Intel, AMD, Motorola, and Gould - I reveal this only so that the reader may account for any possible bias I may have towards my employer's products. The statements I make here in no way represent my employer's position, nor am I authorized to speak on behalf of my employer. In fact, this posting may not even represent my personal opinion, since occasionally I play devil's advocate.

See http://docs.google.com/View?id=dcxddbtr_23cg5thdfj for photo credits.

Wednesday, December 02, 2015

.ignore files for version control systems

It is good to have ignore files (.bzrignore, .gitignore, .hgignore) at the root of a project's repo. Perforce has no equivalent.

It is good to have global ignores - as the bzr manual says, "ignored files which are not project specific, but more user specific. Rather than add these ignores to every project, bzr supports a global ignore file ~/.bazaar/ignore".

In addition to "per-user global", it is also good to have "global across an installation". Git has this, but not bzr.
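For git, the per-user level is the core.excludesFile setting; the same key at --system scope gives the per-installation level. A minimal sketch - the sandboxed HOME and the file name .gitignore_global are illustrative conventions, not defaults:

```shell
# Sandbox HOME so this demo does not touch the real user config.
export HOME="$(mktemp -d)"

# Per-user "global" ignores: any path works for the excludes file.
echo '*.swp' > "$HOME/.gitignore_global"
git config --global core.excludesFile "$HOME/.gitignore_global"

# Per-installation ignores would use the same key at system scope
# (usually stored in /etc/gitconfig; typically needs root):
#   git config --system core.excludesFile /etc/gitignore

git config --global core.excludesFile   # prints the configured path
```

Bzr has the per-repo .bzrignore and the per-user ~/.bazaar/ignore, but no third, installation-wide layer.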

How to Get a Refund For an iPhone, iPad, or Mac App From Apple




Many iPhone apps do not work as expected.



But getting refunds for them is more painful than on Android.



Another reason to prefer Android - if they can stop the security bleeding.


Wednesday, November 11, 2015

I hate Outlook!: the continuing whining, bitching, and moaning

I very carefully recovered circa 251 emails that were misclassified as spam.

Bug reports that I wanted to ensure were not lost, etc.

And then I accidentally deleted them. Wrong folder.

No undo.



I hate Outlook!

Friday, November 06, 2015

Stupid Blogger does not understand difference between symbolic link and data deduplication

See the quote from a UNIX bigot's blog at bottom.



Now, I am a UNIX bigot.  But I also try to be an unbiased UNIX bigot.



In particular, I understand the difference between data deduplication and symbolic links. Data deduplication: a filesystem feature where two completely separate files that happen to have the same contents will be transparently linked, behind the user's back, to share physical storage.



Linked in such a way that, if one file is modified and the other is not, then the change is NOT propagated to the other.



I.e. linked transparently to the user.  Invisibly, except for occupying less disk space.



Symlinks are sometimes used as a "poor man's" approximation to this.  But symlinks are definitely not transparent.  E.g. if the target file is removed, the symlink dangles.  That should not happen in a proper "single instance store".



For that matter, UNIX hardlinks can be used as a "poor man's" approximation to this.   But, again, not transparent.
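The non-transparency is easy to demonstrate with a quick shell sketch (file names invented):

```shell
cd "$(mktemp -d)"
echo "same contents" > original.txt
ln -s original.txt sym.txt   # symlink: a name that points at another name
ln original.txt hard.txt     # hardlink: a second name for the same inode

rm original.txt

# The symlink now dangles: the link itself exists, but resolves to nothing.
[ -L sym.txt ] && [ ! -e sym.txt ] && echo "sym.txt dangles"

# The hardlink still reads -- but it is not deduplication either: before
# the rm, writing through hard.txt would have changed original.txt too,
# which a proper single instance store must never let happen.
cat hard.txt                 # -> same contents
```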



Now, I do not know if Microsoft's single instance store is fully transparent.  But I suspect it is - or at least more so than symlinks or hardlinks.



The Civilized Explorer Travel Bizarre Link Page: "Microsoft Innovations. This is an actual press release dated February 28, 2000, on the actual Microsoft Web site wherein Bill Bolosky and two Microsoft colleagues claim to have invented symbolic links three years ago!
... an idea occurred to them -- why not save operating system disk space by storing duplicate files as links that point to a single file housed in a central location?
Are these guys brilliant ...
During the next 1-1/2 years, Bolosky, a researcher in Microsoft Research's Systems and Networking Group, and three of his researchers worked full time with the Windows 2000 team to build the technology, now known as the Single Instance Store.
Or what?
And you thought Microsoft's only innovations came from other companies that they either bought or crushed."


Stupid UNIX bloggers unite!  You have nothing to reveal to the Internet except your ignorance, and inability to absorb new concepts!




Thursday, October 15, 2015

Google Chrome browser shows corrupted content on Mac OS X – DisplayLink Support

Google Chrome browser shows corrupted content on Mac OS X – DisplayLink Support: "This issue is due to the browser attempting to use hardware acceleration functions not currently available on virtual screens."



I got bitten by this bug. Disabling hardware graphics acceleration in Chrome, but not elsewhere, as DisplayLink recommends, fixed the problem.



I wonder why DisplayLink cannot hook the acceleration calls, and report an error when they are applied to its display?



But I also wonder why Chrome does not check the capabilities of the display.  Here, the problem is probably confusion, since some of my displays have acceleration, and some do not.   Heterogeneity - gotta love it!  



Q: why does MacOS expose use of accelerators to apps?






Thursday, October 08, 2015

OSX HFS+ case insensitive by default - a security bug waiting to happen

I was having problems installing cpanm modules on my MacBook.

Turns out I have had a script ~/bin/CC in my path since circa 1980: cc plus some pleasant defaults. It has worked from UNIX v6 through v7, Eunice, BSD 4.1, 4.2, 4.3, SVr4, Xenix, Gould UTX, Linux, cygwin... and it failed for the first time on Mac OS, with infinite recursion: on a case-insensitive filesystem, the script's invocation of cc resolves back to CC itself.

I wondered if HFS+'s case insensitivity could be exploited for a security hole. Googling reveals that the problem has already been encountered: article.gmane.org/gmane.linux.kernel/1853266 and itworld.com/article/2868393/… (January 2015). Although fixed in Git, this is an exploit waiting to happen for Mac users who have ever installed software from some other UNIX-like, case-sensitive system. For that matter, it is probably a potential security hole for code ported from case-sensitive iOS to Mac OS X.
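A quick probe for the property that bit me - the file names here are made up, and this only tests the directory the script runs in:

```shell
cd "$(mktemp -d)"
echo probe > casetest.txt
if [ -e CASETEST.TXT ]; then
    # HFS+-style: casetest.txt and CASETEST.TXT are the same directory
    # entry, so a ~/bin/CC script that calls "cc" can recurse into itself.
    echo "case-insensitive filesystem"
else
    echo "case-sensitive filesystem"
fi
```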
 

Tuesday, September 29, 2015

Merging in bzr: The git approach vs the bzr approach

Nice comparison of the git vs bzr merging approaches.



Git: "The changes appear to emerge fully-formed with no evidence of the process which created them."



Bzr: "the project's revision history will show the actual process the developer went through to create the branch."



comparison



Since some [D]CVSes support both styles - heck, even git and mercurial support both styles, and I think even bzr does - it may be more accurate to call it "clean history" versus "actual history" approaches.



The availability of a "rebase" tool is the main thing that enables the "clean history" approach.
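In git terms (bzr's normal merge behaves like the --no-ff case), the two styles look roughly like this; the toy repo below is purely illustrative:

```shell
# Build a tiny repo with a trunk ("main") and a task branch.
cd "$(mktemp -d)"
git init -q
git checkout -qb main
git config user.email you@example.com
git config user.name "you"
echo base > f.txt; git add f.txt; git commit -qm "base"

git checkout -qb task
echo work > g.txt; git add g.txt; git commit -qm "task work"
git checkout -q main

# "Actual history": keep the merge commit, so the branch stays visible.
git merge -q --no-ff -m "merge task" task

# "Clean history" would instead rebase and fast-forward:
#   git rebase main task && git checkout main && git merge --ff-only task
# after which the task commits appear to emerge fully formed on the trunk.

git log --oneline --graph
```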





Myself, I am unambiguously an "actual history" advocate.  "Those who cannot remember the past are condemned to repeat it".





But I can understand why so many Linux developers want a clean history.





Myself, I want both: the actual history, and possibly a clean history that is what you see by default, when the actual history is pretty messy.  (And believe me, I have seen the actual history get very messy.)





E.g. if you haven't rebased a task branch before merging, I want to see it in the style that the page depicts as bzr-style.  But I suppose that it is okay to "fold" it into the trunk, if there has been no trunk activity in the meantime.   And if every checkin on the task branch is release-worthy.   But if there are any checkins on the task branch that were half-assed, that you might not want to bisect to, then no, I don't want it folded in.



But if you have rebased a task branch, because the trunk has been modified since the task branch was originally forked, and have tested all of the intermediate points on the rebase, then I want to see BOTH in the actual history.



I want to see the original, pre-rebase, task branch.   Where the work was actually done.



But I am okay with seeing the rebased task branch, and even having it folded into the trunk.



I am okay with presenting ONLY the rebased task branch, and/or having it folded into the trunk, by default, HIDING the original pre-rebase task branch.  But this is by default only.  I would want some sort of indication that says "there's more history here".



Why?



Because I don't believe that you can completely test correctness.



Because experience shows that there is always a chance that the rebased task branch, folded into the trunk, will have a bug. A bug that occurred in the rebased task branch, but not in the original pre-rebased task branch. Essentially a bug caused by interference between whatever happened on the trunk and whatever happened on the task branch.



Even if your full test suite has been run on all of the pre- and post- rebase checkins.   Because sometimes the test suite doesn't test for everything.



Sure, oftentimes it will not matter - you bisect on the rebased and folded task checkins, and find what the bug is.   But sometimes  it is good to understand why the bug occurred, not just what it is.





Rebasing, and other history rewriting mechanisms, have two functions IMHO:



1) cleaning up the history



2) as debugging tools



E.g. if you have a task branch which has passed the test suite at all checkins, and a trunk which has similarly passed the test suite, and you merge - and there is a failure - then obviously the failure is due to an interaction.



Creating a rebased clone of the task branch is then a debugging tool - you can see which of the task branch checkins, which all passed tests pre-rebasing, cause tests to fail post-rebasing.



This is a damned useful thing to do even if the rebased task branch is not checked in / pushed to some more public repo - even if all you do is a branch merge.



E.g. sometimes I use a rebase just for debugging, and then throw out the rebased branch.
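A sketch of that throwaway-rebase debugging workflow, in git terms; the branch names and the toy repo are invented:

```shell
# Build a repo where trunk and a task branch have diverged.
cd "$(mktemp -d)"
git init -q
git checkout -qb main
git config user.email you@example.com
git config user.name "you"
echo base > f.txt; git add f.txt; git commit -qm "base"

git checkout -qb task
echo task >> f.txt; git commit -qam "task work"
git checkout -q main
echo trunk > g.txt; git add g.txt; git commit -qm "trunk work"

# Rebase a *copy* of the task branch, so the original history survives.
git branch task-rebased task
git rebase -q main task-rebased     # leaves HEAD on task-rebased

# ...run the test suite on each replayed commit here; if something fails,
#    bisect on task-rebased to find which replayed checkin interacts
#    badly with the trunk changes...

# Done debugging: throw the rebased branch away; task is untouched.
git checkout -q main
git branch -D task-rebased
```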





What I do object to, however, is rebasing and NOT testing all of the intermediate checkins.












