I have bitched and moaned for quite a while about Gmail providing "searching, not sorting". True also of many Google (web)apps.
Sure, Google usually has good search.
But sorting is often the easiest way to go through a pile of stuff. Sort, and then look for, e.g., many emails from the same company that you no longer have an account with.
Anyway, I have bitched and moaned about the lack of sorting in Gmail.
And today I realized ... I can just use Thunderbird via IMAP to access my Gmail account. Thunderbird has sorting. And, in a few hours, I have been able to get rid of several thousand emails.
Actually, this is not the first time I have realized this. But when I tried it in the past Thunderbird regularly hung in annoying ways. Also, IMAP folders did not map well to Gmail labels. It appears more reliable now. Moreover, I am no longer trying to use Thunderbird for all of my Gmail - just for this sorting and clearing a lot of stuff out. Archiving. Deleting.
Disclaimer
The content of this blog is my personal opinion only. Although I am an employee - currently of Nvidia, in the past of other companies such as Imagination Technologies, MIPS, Intellectual Ventures, Intel, AMD, Motorola, and Gould - I reveal this only so that the reader may account for any possible bias I may have towards my employer's products. The statements I make here in no way represent my employer's position, nor am I authorized to speak on behalf of my employer. In fact, this posting may not even represent my personal opinion, since occasionally I play devil's advocate.
See http://docs.google.com/View?id=dcxddbtr_23cg5thdfj for photo credits.
Saturday, December 08, 2012
Thursday, November 15, 2012
Why Wesabe Lost to Mint - Marc Hedlund's blog
Why Wesabe Lost to Mint - Marc Hedlund's blog:
'via Blog this'
Interesting article on how one startup company (Wesabe) lost to a later entrant in the same market (Mint.com).
There are so many good memes on this post - I want to grab it all, and highlight the stuff I like.
...
There's a lot to be said for not rushing to market, and learning from the mistakes the first entrants make. ...
...
...Mint focused on making the user do almost no work at all, by automatically editing and categorizing their data...
...I was focused on trying to make the usability of editing data as easy and functional as it could be; Mint was focused on making it so you never had to do that at all. Their approach completely kicked our approach's ass. (To be defensive for just a moment, their data accuracy -- how well they automatically edited -- was really low, and anyone who looked deeply into their data at Mint, especially in the beginning, was shocked at how inaccurate it was. The point, though, is hardly anyone seems to have looked.)...
...it was far easier to have a good experience on Mint, and that good experience came far more quickly. ...
... most people simply won't care enough or get enough benefit from long-term features if a shorter-term alternative is available. ...
... Focus on what really matters: making users happy with your product as quickly as you can, and helping them as much as you can after that. If you do those better than anyone else out there you'll win. I think in this case, Mint totally won at the first (making users happy quickly), and we both totally failed at the second (actually helping people). ...
Monday, November 12, 2012
English Canadian in 1837/8?
English Canadian - Wikipedia, the free encyclopedia:
'via Blog this'
I attended a house concert by a Quebecois band yesterday.
In the lead-up to one of their songs they said, wrt the revolutions of 1837 in Lower Canada (Quebec) and Upper Canada (Ontario) "At that time the only Canadians were French Canadians".
But... this abortive revolution established an English Canadian identity, already evolving as a result of the Loyalists and the War of 1812.
The colonies were called "Upper and Lower CANADA", after all. Upper Canada was majority English and/or American.
William Lyon Mackenzie briefly established the "Republic of Canada". (Per wikipedia - they did not teach this in my schools in Quebec. :-( )
--
http://en.wikipedia.org/wiki/Canadian_units_of_the_War_of_1812
Earlier: in the War of 1812, The Frontier Light Infantry were two English speaking companies of the otherwise mainly French Canadian Voltigeurs.
Coleman's Troop were officially The Canadian Light Dragoons.
--
English Canadian identity was probably tentative in 1837, as it is tentative in many ways even in 2012.
But saying that there were no English Canadians is just the sort of myth that a nationalist movement like Quebec creates to justify itself.
Example of a Wide-Open Google Drive Document found by Google Search
Bank Link List - Google Drive:
'via Blog this'
I was just googling for links wrt a Portland area bank, and found this Google Docs page.
While I see nothing sensitive in this document, it nevertheless seems odd to see it shared with the entire world. Although perhaps that was the intention.
Of more concern is the fact that I cannot easily see who "owns" the document. Not that that means much on the web, but it means something.
Wednesday, October 31, 2012
Deinterleaved blogs
It has long annoyed me that I have been using blogger and google+1 recently as the easiest way to take notes.
Last night I coded up bookmarklets to make it convenient to append to several different logs, directly into my several different wikis.
Quite a few, actually: 27 different flavors of log, each with several operations - edit, append new section, append section filling with URL, etc.
Of course I generated the code. I did not hand code more than 100 bookmarklets.
Actually, I don't have room for all of these bookmarklets. So I created a web page of them, and can drag them to my browser as needed.
TBD: place that web page somewhere public.
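For the curious, here is a minimal sketch of the sort of generator involved. The wiki base URL, log names, and edit/append URL parameters below are hypothetical placeholders for illustration only, not the ones any real wiki uses.

# Sketch: generate a drag-to-toolbar HTML page of bookmarklets,
# one per (log, operation) pair. All URLs and names are made up.
import html

WIKI_BASE = "https://wiki.example.com"                  # hypothetical
LOGS = ["personal-log", "work-log", "reading-log"]      # a few of the many flavors
OPERATIONS = {
    "edit":       "{base}/{log}?action=edit",
    "append":     "{base}/{log}?action=append&section=new",
    "append-url": "{base}/{log}?action=append&url='+encodeURIComponent(location.href)+'",
}

def bookmarklet(url_template, log):
    target = url_template.format(base=WIKI_BASE, log=log)
    # a javascript: URL that opens the target page in a new window
    return "javascript:window.open('" + target + "')"

def generate_page():
    links = []
    for log in LOGS:
        for op_name, template in OPERATIONS.items():
            href = html.escape(bookmarklet(template, log), quote=True)
            links.append(f'<li><a href="{href}">{log}: {op_name}</a></li>')
    return "<html><body><ul>\n" + "\n".join(links) + "\n</ul></body></html>"

if __name__ == "__main__":
    with open("bookmarklets.html", "w") as f:
        f.write(generate_page())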
--
Leaving the stuff in wiki makes it easier, I hope, to eventually make assets, organized references, out of the raw logs.
--
Of course this really needs to be in a database. It is stupid to have so many different logs.
Saturday, October 27, 2012
Bookmarks need labels, just like Email, tasks, Calendar, ...
Let's see:
* I've tried using my Gmail label system as my To do list. Labels good. Flat labels bad.
* I just had the bright idea of trying to use my Google Chrome bookmarks file as the .. labels, structures, folders ... for tasks and emails and .. whatever. Whatever has an URL. Which is a lot of stuff.
Unfortunately, the bookmarks are strict hierarchy. Not labels.
---
Cor! Everything needs labels. Everything needs hierarchical, nestable folders. It should be possible to label/tag/categorize everything in the same system. There should not be separate systems: Gmail labels that do not do hierarchy well; Gtasks task lists that do not do hierarchy at all and only allow one classification; hierarchical folders for Bookmarks.
It should be possible to have ONE system.
Sure, it should be possible to see
- Technical
  - Computers
    - OS
      - Linux
        - Bookmarked Web Pages
        - Mailing lists
        - To do

but also

- Bookmarks
  - Technical
    - Computers
      - OS
        - Linux
- Mailing Lists
  - Linux [Technical / Computers / OS]
It should not be necessary to duplicate the entire hierarchy, if it is mainly sparse.
Hierarchy is good. But these are tags.
---
One system.
Memex.
Hardlinks vs. Softlinks
What do we learn from the history of UNIX hardlinks vs. softlinks?
First, UNIX hardlinks did not really help much for directories, because of the implicit assumption of hierarchy. ".." is the parent directory, but what if you have multiple parents? What if things point to each other? You can force directory hardlinks given root, but it is a great way to get screwed up. I think that on some systems it was possible to create cycles that could leave inodes as garbage, no longer referable. Memory leaks.
LEARNINGS: be careful.
LEARNING?: it may be "nice" to have a primary hierarchy, like .. for parent. But it may also be that we just don't really have the concepts to deal properly with more general graphs.
It was (is) much easier to deal with UNIX hardlinks to files.
But... one could get messed up because rm removed a link to an object, but did not actually remove the object itself.
LEARNING: there must be a way to say "remove the object completely", not just the link. Hmm... perhaps a notation like "rm a/b" removing a link, "rm a/b/." removing the object itself. BTW, I often think of all objects as being directories with internal structure.
Big problem with hardlinks is not being able to link across filesystems. No matter how comprehensive your database is, there will always be stuff outside your database. You will need to link to this outside stuff. But... perhaps you can provide all of the goodness of labels, paths, reflexivity, etc., for this outside stuff you link to as well as the inside stuff?
Symlinks helped with many of the manageability problems of hardlinks. At the cost of making things even harder to set up. (Have you ever managed link farms? ...)
With symlinks, there must be a primary name.
When you move the primary object, symlinks go dead.
(LEARNING: permalinks, etc.)
Symlinks do not really resolve the "how do you remove the object" problem. You just have to find the non-symlinked name. With hardlinks, there isn't really a non-symlinked name.
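A quick way to see these semantics for yourself, on a UNIX-like box where hardlinks and symlinks are permitted, is a small Python sketch using only the standard library (os.link/os.symlink may fail on Windows or on some filesystems):

# Demonstrate: removing one hardlink name keeps the object alive,
# while a symlink to a removed primary name dangles.
import os, tempfile

with tempfile.TemporaryDirectory() as d:
    primary = os.path.join(d, "primary.txt")
    with open(primary, "w") as f:
        f.write("the object itself\n")

    hard = os.path.join(d, "hardlink.txt")
    soft = os.path.join(d, "symlink.txt")
    os.link(primary, hard)       # second name for the same inode
    os.symlink(primary, soft)    # a pointer to the primary *name*

    print(os.stat(primary).st_nlink)   # 2: two names, one object

    # "rm" of one name only drops a link; the object survives via the other name.
    os.remove(primary)
    print(open(hard).read())           # still readable: "the object itself"

    # The symlink pointed at the primary name, which is now gone.
    print(os.path.islink(soft))        # True: the link itself still exists
    print(os.path.exists(soft))        # False: it dangles, like a dead permalink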
Hierarchical labels don't cut it
Some systems have hierarchical labels.
E.g. Gmail - in some configurations Label/Sub appears to have label Label and sublabel Sub - but in reality it is just a label Label/Sub. And many tools that look at Gmail do not know about this hierarchy, so present the labels with / paths, flattened. Ugh.
Some tools allow labels to be applied to other labels.
E.g. Android app Folder Organizer (but not in the free trial version, so I haven't tried it).
E.g. Mediawiki categories can be applied to other categories.
This is better, but not quite there.
E.g. I may want to see a hierarchy MailingLists/Linux and Linux/MailingLists, but not all Linux stuff is related to mailing lists. Most Linux stuff will be Computers/OS/Linux, but some Computers/OS/Linux/MailingList stuff may be purely social.
Put another way: there's a large amount of stuff that should be tagged Linux. Some will be MailingList related, some not. Some will be Computers/OS related, but some not.
I see tagging label relationships, such as hierarchies and paths and reflexive relationships, as being more like suggestions. When we tag an object, we may be asked which of the standard relations for that tag will apply. We can accept the default suggestions, or edit at any time. After the fact, we can query using the relations stored with that object, or also with relations stored with the labels on that object that are not currently associated with that object - perhaps because you had not set them up when you tagged the object.
I see label relationships, such as hierarchies and paths and reflexive relationships, as being more related to browsing, choosing and selecting tags, and choosing and selecting objects.
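As a small illustration of the flattening problem: the flat "Parent/Child" label strings that Gmail actually stores can at least be folded back into a tree for display. A sketch in Python (the label names are just examples):

# Fold flat "Parent/Child" label strings back into a nested tree for display.
from collections import defaultdict

def tree():
    return defaultdict(tree)

def fold(labels):
    root = tree()
    for label in labels:
        node = root
        for part in label.split("/"):
            node = node[part]
    return root

def show(node, depth=0):
    for name, child in sorted(node.items()):
        print("  " * depth + "- " + name)
        show(child, depth + 1)

labels = ["Mailing-Lists/Linux", "Mailing-Lists/Android",
          "Technical/OS/Linux", "Technical/OS/Linux/Kernel"]
show(fold(labels))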
Tag items not just with labels, but label paths
Gmail Adds Nested Labels and Message Preview:
'via Blog this'
Brain flash: I was, for the umpteenth time, looking for better mail reading and organization tools, when I re-read this article on Gmail nested labels from 2010.
Which suck.
It sucks that:
Nested Labels is just a cosmetic change that lets you create labels which are displayed hierarchically. If you enable this experiment and create a label like Mailing-Lists/Linux, you'll notice that Linux is displayed as a subfolder of Mailing-Lists. Unfortunately, all the other places that let you interact with labels show the label as Mailing-Lists/Linux... this poor behavior is the default behavior on so many systems that use labels.
My brain flash: messages, items, tasks need to be tagged not just with a set of labels, but with a set of label-paths. I.e. label-paths may want to be a top level object.
A label path is a sequence of labels that might otherwise be considered top level independent, but which we are imposing some structure on.
E.g.
- Mailing Lists
  - Linux
  - Android
  - Agile
- Technical
  - Operating Systems
    - UNIX-like
      - Linux
        - Kernel
        - Distributions
        - Mailing Lists - Android
        - Mailing Lists - iOS
    - Solaris
    - Windows
  - Software Engineering
    - Agile
      - Mailing Lists
I have often talked about stuff like this - the issue of which hierarchy: Mailing List/Linux versus Linux/Mailing List. Labels get that right.
But I have only recently started thinking of label paths as something that gets applied to individual messages. I have hitherto thought of label paths or hierarchies as being properties of the labels themselves.
Perhaps it should be thought of as browsing similarly labelled objects: A/B/C/D is equivalent to A&B&C&D, or even the reordered D&C&A&B. But we can apply the operator "up" to a label-path A/B/C/D to get A/B/C, whereas "up" is not meaningful for commutative A&B&C&D.
Similarly, A/B/C/D/* is equivalent to saying D&A&C&B and ... either any other label or
---
I've been thinking mainly in terms of co-labelling suggestions: applying a label like "Linux" automatically suggests the path Technical/Computers/OS/UNIX-like/Linux.
It is a suggestion, because not all things labelled "Linux" may want to be labelled with the full path. E.g. I might have something with a label-path Mailing Lists/Linux/Meetup, but I may not want a meetup social invite to appear in my technical hierarchy. (Not by default - but I may want to broaden, since sometimes a social invite turns into a place where technical notes get recorded.)
---
Is it meaningful to attach a label-path to an item, rather than to the label itself? I think that I just answered that question: yes. A social invite may want to be Mailing Lists/Linux/Meetup but not Technical/Computers/OS/UNIX-like/Linux.
But I also want to associate label paths with labels. So I can just say "Linux", and have the default label paths suggested.
The default label paths may be suggested not just by label, but also by context. For example, in my to-do list organizer, default paths ToDo/.../Linux may be suggested. If I am reading an email...
---
Again, think of it as browsing. Label path hierarchy like a folder tree browser.
Looking at A/B/C may start off by looking at all items labelled explicitly with the label path A/B/C.
But I may click to widen to A&B&C.
And similarly click to widen to all labelled C.
I think of this latter operation as something that takes me from viewing A/B/C to C.
I think of the folder browser as having (a) the current path to what you are looking at, and (b) alternate paths that get to the same place. To give you more ways to go up. And sideways.
---
Everything I have said about applying label paths is also applicable to applying label DAGs or more general graphs. Paths are just easy to type. But I can easily imagine wanting to apply one label and automatically have it appear by default in several places in the "path" oriented browsing hierarchy.
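To make the idea concrete, here is a toy data model - not any existing tool's API - of items tagged with label paths, with an "up" operation on paths and a "widen to unordered labels" query:

# Toy sketch: label-paths as first-class tags on items.
class LabelPath:
    def __init__(self, *labels):
        self.labels = tuple(labels)          # ordered: A/B/C/D

    def up(self):                            # A/B/C/D -> A/B/C
        return LabelPath(*self.labels[:-1])

    def as_set(self):                        # widen: A/B/C/D -> {A, B, C, D}
        return frozenset(self.labels)

    def __repr__(self):
        return "/".join(self.labels)

class Item:
    def __init__(self, title, *paths):
        self.title, self.paths = title, list(paths)

def by_path(items, path):                    # exact label-path match
    return [i.title for i in items if any(p.labels == path.labels for p in i.paths)]

def by_labels(items, labels):                # widened, commutative A&B&C match
    want = frozenset(labels)
    return [i.title for i in items if any(want <= p.as_set() for p in i.paths)]

items = [
    Item("kernel patch discussion",
         LabelPath("Technical", "OS", "Linux"),
         LabelPath("Mailing Lists", "Linux")),
    Item("meetup invite", LabelPath("Mailing Lists", "Linux", "Meetup")),
]

print(LabelPath("A", "B", "C", "D").up())                      # A/B/C
print(by_path(items, LabelPath("Mailing Lists", "Linux")))     # exact path: only the kernel discussion
print(by_labels(items, ["Linux"]))                             # widened to one label: both items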
Thursday, October 25, 2012
Multiple paths of history
I have blogged, whined, ranted and complained about systems that allow you to, nay, encourage you to, edit history. But I also whine about systems that do not allow you to do something as trivial as fix a typo in a checkin message, let alone systems like Mercurial that do not really allow you to add properties like "passes the days-long QA test suite" to a version of the code in the repository.
Moreover, I really do understand why you want to rearrange history. My classic example is of interleaved changes
V0---(a1)--->Va1---(b1)--->Va1b1---(a2)--->Va2b1---(b2)--->Va2b2 = Vab
I.e. starting off at version V0, apply changeset (a1) to get version Va1, then changeset (b1) to get version Va1b1, and so on until you get final version Va2b2, which we will call simply version Vab.
I.e. there are really two independent changes going on, (a)=(a1)+(a2) and (b)=(b1)+(b2). But they got interleaved.
Try as you might, this happens.
Using history rewriting tools like git rebase and hg graft you can rearrange the histories.
V0---(a1)--->Va1--->(a2')--->Va --->(b1')--->Vab1'---(b2')--->Vab
or in the other order
V0---(b1")--->Vb1"--->(b2")--->Vb --->(a1")--->Va1"b---(a2")--->Vab
Note that I have tried to indicate by priming, ' and ", that the changesets applied in different order may not be identical. Note: we are not talking about applying the SAME changesets in a different order, which might give different final results Vab, but instead calculating MODIFIED changesets (a1"), (b1'), etc. to give the same final result Vab.
Compactly, letting (a)=(a1)+(a2) and (b)=(b1)+(b2)
V0---(a)--->Va--->(b)--->Vab
or in the other order
V0---(b)--->Vb--->(a)--->Vab
I know how to do this with standard history editing tools.
---
But this is problematic when others have pulled and cloned and made modifications to intermediate versions, like Va2b1, from before the history edit. In Mercurial, if that version is deleted from the repository, you may not be able to merge back in - or, when you merge, it may revive the deleted version from the dead.
This is bad.
Moreover, this "you must edit history before pushing" philosophy leads to people delaying sharing code. Which is bad.
---
I have been thinking more and more about recording multiple paths of history in the repo.
I.e. pushing first
V0---(a1)--->Va1--->(a2')--->Va --->(b1')--->Vab1'---(b2')--->Vab
and then later, at your leisure, going and rearranging the history so that it makes sense:
V0---+---(a1)--->Va1--->(a2')--->Va--->(b1')--->Vab1'---(b2')--->Vab
      \                                                           ||
       +-------------(a)------------->Va----------(b)----------->Vab
I.e. the final nodes Vab are equivalent.
Or, pushing my ascii art skills
V0---+---(a1)--->Va1--->(a2')--->Va--->(b1')--->Vab1'---(b2')---+--->Vab
      \                                                         /
       +-------------(a)------------->Va----------(b)----------+
(I think I like the equivalence || drawing better.)
Still later you may create a second alternate history:
V0---+---(a1)--->Va1--->(a2')--->Va--->(b1')--->Vab1'---(b2')--->Vab
      \                                                           ||
       +-------------(a)------------->Va----------(b)----------->Vab
        \                                                         ||
         +------------(b')------------>Vb---------(a')---------->Vab
The basic idea is that when you establish that you are creating alternate history paths, the operations are constrained to create the same final results.
And then, for those who want a simple history, you hide the paths, the changesets and the versions, that you do not want to appear by default. But they are still recorded in the history, so that if somebody made modifications based off an intermediate version, you can still merge meaningfully. You get to decide if that merge appears as if based off a version that is currently hidden, or if you want to roll up the visibility of its ancestors.
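As a thought experiment, the history graph might look something like the toy data structure below. This is emphatically not how git or Mercurial store history today; it is just a sketch of "the messy path stays recorded but hidden from the default view, so merges against Va2b1 still have an ancestor to hang off":

# Toy model of multiple paths of history: a DAG where several changeset
# paths converge on the same final version, and paths can be hidden from
# the default view without being deleted.
class History:
    def __init__(self):
        self.edges = []                      # (parent, changeset, child, hidden)

    def record(self, parent, changeset, child, hidden=False):
        self.edges.append((parent, changeset, child, hidden))

    def add_alternate_path(self, start, end, changesets, intermediates):
        """Record an equivalent path start -> ... -> end; the messy original
        path it parallels stays in the graph untouched."""
        nodes = [start] + intermediates + [end]
        for (p, c), cs in zip(zip(nodes, nodes[1:]), changesets):
            self.record(p, cs, c)

    def visible(self):
        return [(p, cs, c) for (p, cs, c, hidden) in self.edges if not hidden]

h = History()
# The messy interleaved history, later hidden from the default view:
for p, cs, c in [("V0", "a1", "Va1"), ("Va1", "b1", "Va1b1"),
                 ("Va1b1", "a2", "Va2b1"), ("Va2b1", "b2", "Vab")]:
    h.record(p, cs, c, hidden=True)
# A cleaned-up equivalent path ending at the *same* node Vab:
h.add_alternate_path("V0", "Vab", ["a", "b"], ["Va"])
print(h.visible())        # only the clean path shows, but Va2b1 is still in h.edges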
Wednesday, October 24, 2012
How to capture a screen image with transient flyovers in VNC
I usually work in VNC, on a UNIX box, typically from a Windows PC laptop.
I occasionally want to capture screen clippings of GUI applications running in VNC. Problem: often they are transient - any key press, etc., causes what I want to capture to disappear.
Usually on Windows my screen capture tools (Windows 7's snipping tool, or SnagIt) have a keyboard or mouse shortcut that takes priority. However, apparently VNC managed to install itself underneath, so all keyboard and input events get sent to VNC and Linux and the app.
So here's how to capture such a transient GUI popup or the like in VNC:
VNC has its own shortcut. On my machine, F8.
Press F8. Go to the VNC options menu. Unselect "send keyboard events to server" and "send mouse events to server". Now enter the screen capture shortcut. Undo so that VNC works again.
Not nice, but I can use it. Will have to see whether AutoHotKey sits underneath all of these, so I can create a shortcut.
GMake order dependent inconsistency
I dislike how GMake is inconsistent about ordering:
You can define rules out-of-order
top-rule: sub-rule1 sub-rule2
	@echo top-rule

sub-rule1:
	@echo sub-rule1

sub-rule2:
	@echo sub-rule2
but if you want to do something like collecting the sub-rules in a variable, this breaks:
top-rule: $(SUBRULES)
	@echo top-rule

SUBRULES+= sub-rule1
sub-rule1:
	@echo sub-rule1

SUBRULES+= sub-rule2
sub-rule2:
	@echo sub-rule2
because $(SUBRULES) in the prerequisite list is expanded as soon as the rule is read, before the later += lines have been seen.
It must be fixed by rearranging
SUBRULES+= sub-rule1
sub-rule1:
	@echo sub-rule1

SUBRULES+= sub-rule2
sub-rule2:
	@echo sub-rule2
top-rule: $(SUBRULES)
	@echo top-rule
Darn! But I like being able to write things out of order, top-down.
In general, I like languages that have relaxed order dependencies. Like some RTL languages (notably Intel iHDL). Even C++ has relaxed ordering in some places.
But the inconsistencies such as above are painful and confusing.
--
Single assignment is the easiest way to make things order independent.
But accumulation - += - is very much required.
Q: is the accumulation done order dependent, or not?
What is needed is accumulating += - probably order dependent.
And then expanding.
With an error if expanding results in changes to variables already being expanded. ??
Or relaxation.
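As a toy illustration of the semantics being wished for here - accumulate with +=, expand only after everything has been read, and complain about self-reference - a small sketch follows. This is not a GMake feature; GNU Make's own closest workaround would be something like secondary expansion with $$(SUBRULES).

# Toy "accumulate then expand" variable semantics, order independent.
import re

definitions = {}                     # name -> list of appended fragments

def append(name, value):
    definitions.setdefault(name, []).append(value)

def expand(name, in_progress=()):
    if name in in_progress:
        raise ValueError("circular expansion of " + name)
    parts = []
    for fragment in definitions.get(name, []):
        # substitute $(VAR) references, recursively, only at expansion time
        parts.append(re.sub(r"\$\((\w+)\)",
                            lambda m: expand(m.group(1), in_progress + (name,)),
                            fragment))
    return " ".join(parts)

append("TOP_DEPS", "$(SUBRULES)")    # referenced before SUBRULES is "defined"
append("SUBRULES", "sub-rule1")
append("SUBRULES", "sub-rule2")
print(expand("TOP_DEPS"))            # "sub-rule1 sub-rule2"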
Tuesday, October 16, 2012
New queue... - GQueues
New queue... - GQueues:
'via Blog this'
Another organizer, spoken well of.
But yet another tool that does not understand hierarchy.
You can have categories: Home and Work are default. I added Personal.
You have queues within categories.
But you don't have categories within categories, queues within queues.
Nowhere to record comments as to what a category or queue is used for.
Tags are a flat space.
Sigh
Monday, October 15, 2012
Notify at end
I've already talked about this, but again:
My wife is dropping our daughter off at Music class, and I am picking her up a while later.
Compound Event:
Start Driving W: 3pm, alarms -5 minutes, for my wife and daughter
Interval: driving, blocked out for my wife and daughter
Event Start: 3:30pm
Interval: blocked out for my daughter, but not for my wife nor me
Event End: 5:30pm
Start Driving H: 5pm, me. Alarm -5 minutes, for me.
Interval: Start Driving H to Event End: blocked out for me
...etc...
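Represented as data, the compound event might look something like the following sketch. It is purely a hypothetical data model (no existing calendar API is implied), using the times from the example above on an example day:

# Sketch: a compound event whose legs each have their own alarm and their
# own list of people whose calendars they block out.
from datetime import datetime, timedelta

def t(hhmm):  # times on the example day
    return datetime(2012, 10, 15, *map(int, hhmm.split(":")))

compound_event = {
    "title": "Music class drop-off / pick-up",
    "legs": [
        {"what": "Drive there", "start": t("15:00"), "end": t("15:30"),
         "alarm": timedelta(minutes=-5), "blocks": ["wife", "daughter"]},
        {"what": "Class",       "start": t("15:30"), "end": t("17:30"),
         "alarm": None,                  "blocks": ["daughter"]},
        {"what": "Drive back",  "start": t("17:00"), "end": t("17:30"),
         "alarm": timedelta(minutes=-5), "blocks": ["me"]},
    ],
}

def busy(person):
    return [(leg["start"], leg["end"]) for leg in compound_event["legs"]
            if person in leg["blocks"]]

print(busy("me"))        # only the pick-up drive blocks my calendar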
Meet Person at 7:30
A pattern in my usage of calendaring programs: I will often create meetings, events, appointments of the form "Pick up D at 7:30 after Music class"
I.e. I will put the actual target time in the title.
Reason: I will set the actual start time earlier, e.g. at 7pm, to give me time to drive to the music class to pick her up.
And I will have alarms, notifications, set at the usual countdown - 10 minutes, 5 minutes - before.
Q: Why not just set the event for the pickup time, and set an alarm?
A: I need the time to be blocked out, so that no meetings will be set up conflicting.
FLYT Magazine file - IKEA
FLYT Magazine file - IKEA:
'via Blog this'
Most cost effective magazine files I have found, at 40 cents apiece.
Better than Uline, at $1.50 each.
Utility Concepts ...
All organizer facets need security and access control.
Organizer facets = notes, tasks, to do, checklist, calendar ...
All need hierarchy.
All need grouping. Folders. Etc. Labels. Tags.
Heck - just think of them all as objects in a filesystem. Perhaps we want to type the directories, the folders, as MS has done with WinFS - so that a list of tasks = a folder, a directory, that contains nothing but tasks. So you can't put something else in there by accident. But note that you want to be able to add notes, etc., everywhere - i.e. you want to be able to attach notes and calendar items to the folder/directory that contains a list of tasks, etc.
All need the same edit tools: drag and drop objects to folders/labels, and vice versa. Multiple selection. Etc.
All need the same capabilities. It's just a question of providing convenient defaults. Convenient views. E.g. it's most convenient to display calendar items in a grid. But you probably don't want that to be the default way of seeing To-Dos or Notes. (Although it would be nice to be able to do so.)
--
The biggest differences between filesystems/folders/directories and objects like text notes/calendar items/to-dos are granularity, orderedness, and embeddedness.
Files tend to be relatively coarse grain. (Although the Reiser FS can efficiently support 0 and 1 character files.)
Files in a directory/folder do not necessarily have an order. (Although there may be one.)
Once you transition from a list of files in a directory to a text file of notes, you have really transitioned. You are no longer in a file browser anymore. You're in a text file reader or editor, or an HTML browser. You can transition back, but you have gone through a phase change.
... For a long time I have been thinking about representing filesystems as XML. Obviously, XML can represent any filesystem. Moreover, XML can also represent the internal structure of the files. XML has the orderedness and embeddedness that a filesystem lacks.
We can imagine an XML browser that can transition from browsing filesystems to browsing objects. Or a family of browsers, some specialized for the filesystem subset of XML, some for the text file, calendar file, etc. But all having the same basic capabilities.
The filesystem may just serve XML up to the browser. Making it essentially transparent whether what is being served is a directory listing, or file contents.
...This is like JSON versus XML. Filesystems are like JSON: hierarchical, but not really ordered. File contents often have more structure than JSON. Orderedness and embeddedness are two standard such aspects of additional structure beyond JSON.
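A minimal sketch of the filesystem-as-XML idea, using only the Python standard library; the element names (directory, file) are invented for the sketch, not any standard:

# Represent a directory tree as XML, so a browser could walk from folders
# into file contents without a "phase change".
import os
import xml.etree.ElementTree as ET

def fs_to_xml(path):
    if os.path.isdir(path):
        elem = ET.Element("directory", name=os.path.basename(path) or path)
        for entry in sorted(os.listdir(path)):
            elem.append(fs_to_xml(os.path.join(path, entry)))
    else:
        elem = ET.Element("file", name=os.path.basename(path))
        try:
            with open(path, encoding="utf-8") as f:
                elem.text = f.read()          # file contents: ordered, embedded
        except (UnicodeDecodeError, OSError):
            elem.set("unreadable", "true")
    return elem

if __name__ == "__main__":
    print(ET.tostring(fs_to_xml("."), encoding="unicode")[:500])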
Task dependencies
I know that Kent Beck says that we should not need to record task dependencies in our Agile Scheduling.
But, for my personal tasks, it would be nice to have task dependencies taken into account.
E.g. perhaps hiding or graying out tasks that cannot be done yet because of other tasks they depend on.
(In many ways, a task dependency is like a "start by" date. As in "this task cannot be started until such and such a date." Whereas other tasks can be done at any time, but you just give them a "tickle date" to remind you to pay attention. I may want to hide the former, although I think that I prefer graying.)
As in, propagating due dates: if Task B depends on Task A, and B has a due date Tb, then A must be done by that date - and possibly earlier, e.g. if you provide a work estimate. (Hmm, work estimates could be "number of days", "number of weekend days", etc.)
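A sketch of that due-date propagation, with made-up tasks, dates, and estimates:

# If B depends on A, then A effectively must be done by B's due date minus
# B's own work estimate. Everything here is illustrative.
from datetime import date, timedelta

tasks = {
    # name: (due date or None, work estimate in days, list of prerequisites)
    "renew passport": (date(2012, 12, 1), 10, ["get photos"]),
    "get photos":     (None, 1, []),
}

def effective_due(name, seen=()):
    if name in seen:
        raise ValueError("dependency cycle at " + name)
    own_due, _, _ = tasks[name]
    candidates = [own_due] if own_due else []
    # any task that depends on me imposes (its due date - its estimate) on me
    for other, (due, estimate, prereqs) in tasks.items():
        if name in prereqs:
            inherited = effective_due(other, seen + (name,))
            if inherited:
                candidates.append(inherited - timedelta(days=estimate))
    return min(candidates) if candidates else None

print(effective_due("get photos"))   # 2012-11-21: propagated from "renew passport"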
Sunday, October 14, 2012
Google Tasks
I want to like Google tasks, really I do.
But, try as I might, I can't bring myself to like them. Or use them regularly. They are just so limited wrt other task management systems.
Here, I will take notes. Although I may just as often start a new blog item.
Tasks due Dates, Times,
Google Tasks have due dates.
Sometimes tasks also need a due time. E.g. it is pointless to call a business before it is open. Or, think about how often people batch tasks to be done before work, at lunch time, or after work.
And think about how many tasks can only be done during business hours, not on the weekends or after work. (These tasks are my particular bugbear.)
Digital Pen – Pegasus
Digital Pen – Pegasus:
'via Blog this'
http://www.ehow.com/list_7214562_pens-create-digital-notes.html
Tablet NoteTaker
- Pegasus Technologies has developed a digital pen called "Tablet NoteTaker." The NoteTaker is a cordless digital pen that writes with ink on plain paper, while creating a digital ink copy of the notes. It connects to computers or other devices such as mobile phones via Bluetooth or USB technology. Microsoft Vista, Windows 7 and Office 2007 utilities support the NoteTaker as both a digital pen and a mouse. Via Microsoft's handwriting recognition, notes can be transferred to typed text. The Notetaker is also compatible with Mac OS X systems.
Read more: Pens That Create Digital Notes | eHow.com http://www.ehow.com/list_7214562_pens-create-digital-notes.html#ixzz29K21vDLE
Saturday, October 13, 2012
Digital Pen for 3x5 index cards
I'd be relatively happy if the digital pen that I would like to buy would just email me the image.
Sure, I'd like to interface to good software to manage such images. But I doubt that there is any good software.
AdapX's connection to Microsoft OneNote would be nice. I like many aspects of OneNote. Except that it is so much NOT what I am using these days.
Hmm.... if OneNote was in the cloud, and on a Windows phone ... that might be enough reason to buy a Windows phone.
Although talk about data liberation (NOT).
---
Digital pen to Evernote might almost be okay.
But digital pen to email might be good enough.
---
Wish that the digital pen could also take pictures.
Printing | Capturx
Printing | Capturx: "An initial Capturx purchase includes a large amount of unique digital dot pattern: enough to print more than 6000 8-1/2-x-11 pages or more than 2000 E-sized prints. Dot pattern is like printer toner: it gets used up as you print. Additional dot pattern is available for purchase and download from Adapx."
'via Blog this'
Oh, this is cool (in a cruel sort of way): just because AdapX/CapturX can print its dot pattern on regular paper doesn't mean you can buy the pen and use it forever without buying more stuff from AdapX.
Digital razor blades: you have to go back to the source and buy more. And from their point of view, they don't even have to sell you anything physical: they just sell you the dot patterns.
Cool. And annoying.
I wonder if the dots are a Penrose tiling, or if they just have a numeric unique ID encoded in the dot patterns. (Plus probably a copyright watermark, eh?)
Hmm.... is there a way that you can look at a Penrose tiling printed on a piece of paper, and easily compute your coordinates, where in a sequence of such pages, you are?
(Modulo paper size and resolution, it will repeat. But not for a long time, depending on the resolution.)
How to Print on Index Cards | eHow.com
How to Print on Index Cards | eHow.com:
'via Blog this'
Using blogger for a totally inappropriate purpose - capturing notes on web pages.
But only because I don't have takeout for Google+, my normal web page annotater.
How Do I Print 3 X 5 Index Cards? | eHow.com
How Do I Print 3 X 5 Index Cards? | eHow.com: "Print "
'via Blog this'
Ah, good. It appears that standard printers can grab and print on 3"x5" index cards.
I'll try this when I get home.
This makes the AdapX pen more attractive: buy cheap index cards, print a bunch of them. And then record.
I doubt any printer I have can auto-feed 3"x5" cards, however.
Checklists may need to be doubled
A checklist - e.g. a checklist for buying groceries - may need to be doubled.
Which is my attempt to say in a cute way that there are two separate checklist stages. Which could be imagined as a checklist with two columns:
item | in inventory | purchased |
---|---|---|
honey | Ο | Ο |
yogurt | ✓ (half full) | - |
blueberries | ✗ | O |
tomatoes | Ο | Ο |
bread | Ο | Ο |
Put another way: when you are building a checklist to go grocery shopping, first, while at the fridge or pantry, you may be selecting which items from your regular list of groceries you need to restock. I.e. select which items from your checklist template will go into the shopping list you want to buy today. While at the store you may be checking things off from that list.
Put this way, it sounds like two separate checkoffs, the first from the template producing the checklist for the second.
But, it might be nice to have both the in-stock and purchased columns in view while at the store. Say if there is a great sale on yogurt - you may already have some, but not so much that you would not mind buying more. (See my earlier rants about how it should be possible to attach notes to checklist items, as I have done above.) Therefore, the two column format above.
Put still another way: checklist items may have states that are the pair (in-stock,purchased). Some views may make the pair visible, some not.
---
This isn't just about grocery lists. Think required/verified, e.g. for a code review, where the reviewers have the option of making some check off items non-required.
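Sketched as data, with the item names from the table above and illustrative states:

# A "doubled" checklist: each item carries a pair of states,
# (in_stock, purchased), and different views show one or both columns.
checklist = [
    {"item": "honey",       "in_stock": False, "purchased": False},
    {"item": "yogurt",      "in_stock": True,  "purchased": False,
     "note": "half full - buy more only if on sale"},
    {"item": "blueberries", "in_stock": False, "purchased": False},
]

def at_the_pantry(items):
    """First pass: decide what goes on today's shopping list."""
    return [i["item"] for i in items if not i["in_stock"]]

def at_the_store(items):
    """Second pass: both columns visible, so the yogurt note still shows."""
    for i in items:
        stock = "have some" if i["in_stock"] else "out"
        bought = "bought" if i["purchased"] else "not yet"
        print(f'{i["item"]:12} {stock:10} {bought:8} {i.get("note", "")}')

print(at_the_pantry(checklist))
at_the_store(checklist)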
Avery Laser Index Cards 3 x 5 Box Of 150 by Office Depot
Avery Laser Index Cards 3 x 5 Box Of 150 by Office Depot:
'via Blog this'
Well, "laser index cards" are pretty expensive - 7.5 cents apiece.
Hmmm...... if they were wipe off, they could be reused.
Printing | Capturx
Printing | Capturx:
'via Blog this'
AdapX / CapturX at least allows you to print your own special paper.
Now, can I find a printer that will print index cards? Business cards I suppose, although business card stock is pretty expensive.
Livescribe :: Store :: Livescribe Sticky Notes
Livescribe :: Store :: Livescribe Sticky Notes:
'via Blog this'
Livescribe doesn't seem to have index cards, but does at least have sticky notes.
Interesting that the notes seem to have an ID in the dot pattern, and that is used to distinguish items that you add to after going somewhere else.
Livescribe :: Store
There seem to be two classes of digital pens:
(1) those that write on special paper
(2) those that have a separate device that senses the position of the pen.
I am aware of academic and research digital pens that are entirely self contained. Some motion sensing. Some video based - like an optical mouse, they detect motion on nearly any surface that has texture. (Even when lifted off....)
But I am not aware of any commercial product where the digital pen is entirely self sufficient.
I want such a product.
...
But... oh shoot, special paper is the marketeer's dream. Razor blades. Printer ink. Sell the pen, and then keep selling the supplies. Patent protected, I'm sure.
Recording Pens
I want a pen that can record what I am writing on an index card or slip of paper, and upload it to my PIM.
Because my cellphone / PDA doesn't always have a charge.
But I would prefer not to have to use that big acoustic sensor. Want a freestanding pen.
Of course, I would prefer not to have to carry a special pen. I could scan after the fact. But if a special pen kept charged long enough, it would save a step.
User Interface Designers neglect Nesting and Linking
A common problem of user interface designers is that they neglect hierarchy and linking.
E.g. for note-taking programs: sometimes you want a note within a note, a note attached to a note.
E.g. for calendar programs:
- notes on calendar items
- links between calendar items
- to-do lists and checklists attached to calendar items
For to-do lists and checklists:
- notes on the top level lists and checklists
- notes on individual items
- links to calendar items
- checklists nested within checklists
- checklists nested within each other and linked to each other
- e.g. when I go travelling on business I take all of the electronics I lug around every day, ++
For reminders: (which I am only recently realizing are distinct objects in their own right)...
You get the picture: all of the basic data types of my imaginary PIM (Personal Information Manager)
- text notes
- drawings and bitmaps (e.g. screen captures, although I still like vector graphics)
- time scheduling and tracking
- recording: diaries, logs, and journals
- scheduling and planning: calendars and schedules
- hey, writing this made me realize that schedules and to-do lists and checklists are related
- e.g. might want to reuse a schedule/plan for arranging a trip or a meeting - I guess that is like a template for a trip on a travel website - and when instantiating such a template, drop items onto your calendar/schedule relative to the target date (although the likelihood is that you will tweak the dates)
... and I am sure that I have forgotten some ...
All need the ability to be embedded in other objects, and to have other objects embedded in them.
(I want to say "of course there will be leaf types" ... but do there need to be? Can we come up with a rendering scheme that is completely self recursive, without having to introduce leaf types (like TextNote, versus TextNoteThatCanHaveOtherStuffEmbeddedWithinIt). Certainly a data representation like XML, without DTDs, can be completely self recursive[*])
([*]: actually, it seems, based on tests, that DTDs permit recursion. Which is okay by me: I like being able to have DTDs, as long as they can be optional.)
Access control and notifications are similar
Access control: given that you know the name, the path to an object, are you allowed to access it?
Notification: the object, a copy or a link, is pushed to people via some communication system, saying "Look at this now".
These are dual.
I want much, almost all, of what I write to be public. Like this blog. It seldom hurts, and once in a while somebody notices, finds it in search, and replies and helps me out.
I.e. basically I am thinking out loud. Talking to myself in public. People walk to the other side of the Internet when they see me coming.
But I don't want to push most of what I want to do to people. Once in a while I will - once in a while I will share something via Google+, or email it, or copy it to my wiki, or copy it to USEnet newsgroups like comp.arch.
But, these are dual. In an ideal world almost the same concepts should control access control and notifications.
Certain posts I may want to make publicly accessible, but not push to my Google+ stream. Or push to a narrow list of friends.
Confusingly, when I post to Blogger a Google+ window pops up and asks me if I want to Share via Google+. But this notion of "share" is notification. It is not to be confused with access control. AFAIK Blogger has no access control.
---
It's like preschool teachers, or the leaders of the psychobabble training sessions that Intel made us take: "Do you have anything you want to share with the group?"
Maybe... I'm willing to passively share lots of my thoughts. Give you access if you ask. But I am reluctant to push all of my thoughts to others, to actively share.
Why use blogger rather than Google+?
Waiting at carwash...
Why use blogger rather than Google+?
Mainly, I know how to export data from Blogger. I don't know how to grab all of my posts from Google+. (http://www.dataliberation.org/takeout-products/-1s says "At the moment, this only includes sites that you have +1'd (no posts)").
I.e. for some things exportability trumps convenience and access control.
Perhaps in 100 years we will have ubiquitous access control, exportability, etc. And then the differences between wikis and blogs and ... whatever Google+ is, a stream of comments ... will be just user interface and how the information is structured.
I'd like a world where the blog/wiki/stream boundaries are blurred. But I don't need it.
Higher priority: it sucks that things that should be ubiquitously available, like access control and exportability, are so often the criteria for choosing which tools to use. I wonder how many good UI ideas are dying because they don't do the other stuff right? Certainly, the big reason why I use so many Google tools is that they at least show signs of doing the "utilities".
Friday, October 12, 2012
Rewriting history can be good (esp checkin messages)
It can be good to be able to rewrite history.
Or at least checkin messages - or, rather the text associated with a version.
Checkin messages are really what was entered at checkin time. But if these can be rewritten, they may warrant a different name.
Writing good checkin messages can be hard. Sometimes I hesitate to merge my changes because it will take too much time to write a good message - especially given our local rule of having a branch merge summarize all changes on the branch. But slow to integrate is bad. Or I write a sub-par checkin message. But that is also bad.
Better to merge, integrate asap, with whatever you can say at that time. And, if necessary, go back and rewrite the messages to improve them.
It's like refactoring.
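A minimal sketch of the mechanics, assuming a Mercurial new enough to have commit --amend and a changeset that has not been pushed yet; older Mercurials need a history-editing extension for the same trick:
hg ci -m 'quick, sub-par message while integrating'
# ... later, before pushing, improve it:
hg ci --amend -m 'better, fuller description of what the branch did'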
LOD branches (Lines of Development)
Not sure if this is a Glewism, although Brad Appleton's book undoubtedly has something similar.
A LOD branch is not like a task branch. A task branch ideally exists only briefly.
A LOD branch is long lived. It may be periodically synchronized with other LOD branches, and/or the mainline of development. Bidirectionally synchronized - changes may be pushed and pulled.
For long periods of time a LOD branch may be collapsed - basically part of the main line of development. But it may be revived as a separate LOD branch.
Examples of LOD branches:
* branches for different platforms, where there is not a single source tree with ifdeffing or other conditionals
* maintenance branches for old major releases
* branches where you are working on a new experimental feature or system
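A minimal sketch of the mechanics with Mercurial named branches (the branch name is made up):
hg branch experimental-widget           # open the LOD branch
# ... work and commit on it, possibly for months ...
hg merge default                        # pull the mainline into the LOD branch
hg ci -m 'sync LOD branch from default'
hg update default                       # push the LOD branch's work back out
hg merge experimental-widget
hg ci -m 'merge experimental-widget LOD branch into default'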
Closing and merging branches
I like developing on branches. Task branches usually, short and sweet, merged back into the parent main line of development as soon as possible. Sometimes longer lived "line of development" branches, merging to and from the mainline. Named branches if I have had foresight - although Mercurial's anonymous branches are not so bad, even if they can be misleading. (Wanted: retroactive renaming of branches.)
Just now closing a doubly nested task branch, merging into its parent line of development branch, and merging that into the default trunk.
Straightforward. But annoying in that I have to run tests at each stage of the merge. (Actually, I feel that I should run such tests - the project is not so well disciplined.) Plus check in three times: once for the last change in the task branch, once in the LOD branch, and once in the default trunk. With three separate checkin messages.
Some of this testing is unnecessary. E.g.
hg update -r LOD
// close and merge task branch into LOD branch
hg merge -r task-branch
make -j test
hg merge -r default
// LOD changes, but no changes wrt default. i.e. no LOD changes during task branch life.
make -j test <--- unnecessary, because tested on task branch
hg ci -m 'merged task branch'
// merge from mainline before pushing back
hg merge -r default
// no changes.
make -j test <--- unnecessary
hg ci -m 'updated LOD branch from default trunk main line of development'
// merge back into trunk
hg update -r default
hg merge -r LOD
// many changes wrt default, but no changes wrt LOD
make -j test <--- unnecessary, since file contents same as LOD
hg ci -m 'updated default trunk main line of development from LOD branch ... need more description'
i.e. it is good to know when there have been no changes in the actual file contents wrt one of the parents.
unnecessary tests could be eliminated.
(unless, of course, the tests do stuff specific to whatever branch the workspace is on. Which we actually have a minor example of. :-( )
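A minimal sketch of how a merge script might detect the "no real change wrt a parent" case and skip the redundant test run; the branch name is a placeholder, and it assumes hg status --rev, which compares the working copy against the given revision:
if [ -z "$(hg status --rev task-branch)" ]; then
    echo 'merge result is identical to task-branch; skipping make -j test'
else
    make -j test
fi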
---
similarly for checkin messages:
we have the convention of having branch merge messages summarize everything done on the branch.
So above I end up writing almost the same checkin message twice. Which is more hassle than it sounds.
Thursday, October 11, 2012
emacs comes up with all characters as boxes
This blog item is not really a publication; it's just to record a stupid thing. This blog is the easiest place I know to record little workarounds, etc. It's indexed. Plus, no harm in others seeing. And occasionally they tell me better ways to do things.
Recently I have been plagued by emacs coming up in VNC with all characters looking like boxes.
This requires xfs, the X Font Server, to be reset.
Error Messages in the VNC Logfiles
Last time I went looking for log messages:
_XSERVTransmkdir: Owner of /tmp/.X11-unix should be set to root
Xvnc Free Edition 4.1.2
Copyright (C) 2002-2005 RealVNC Ltd.
See http://www.realvnc.com for information on VNC.
Underlying X server release 70101000, The X.Org Foundation
Tue Jan 3 10:29:51 2012
vncext: VNC extension running!
vncext: Listening for VNC connections on port 5904
vncext: Listening for HTTP connections on port 5804
vncext: created VNC server for screen 0
Could not init font path element unix/:7100, removing from list!
Warning: Cannot convert string "nil2" to type FontStruct
Tue Jan 3 10:30:13 2012
Connections: accepted: 192.168.4.92::59724
SConnection: Client needs protocol version 3.8
SConnection: Client requests security type VncAuth(2)
Tue Jan 3 10:30:19 2012
VNCSConnST: Server default pixel format depth 16 (16bpp) little-endian rgb565
VNCSConnST: Client pixel format depth 8 (8bpp) rgb max 3,3,3 shift 4,2,0
Tue Jan 3 10:30:20 2012
VNCSConnST: Client pixel format depth 16 (16bpp) little-endian rgb565
Related errors: missing lucida, courier, etc. fonts.
How to reset XFS
From IT (months ago)
for future reference,
if it happens again, check your vnc log (should be something like ~/.vnc/mipscs587:1.log)
for an error like:
Could not init font path element unix/:7100, removing from list!
if you see this, you need to run, as root:
/etc/init.d/xfs restart
then restart your vncserver session
Rewriting history makes you look good
One thing I dislike about rewriting history in version control systems: it makes you look smarter than you are.
Let's imagine that you are developing a feature F. So, starting from the trunk, you do it on a branch
trunk -> F1 -> F2 -> ...
Then you realize there is a bug in the trunk, in something you depend on. You fix that, but still on your branch
trunk -> F1 -> F2 -> bugfix_in_F
You want to put that bugfix onto the trunk as soon as possible, so you apply it to the trunk. Now we really have a branch:
trunk ---------------------------------> bugfix
\ /
+->F1 -> F2 -> bugfix_in_F ->+
then eventually you merge the feature
trunk ---------------------------------> bugfix -----> F
\ / /
+->F1 -> F2 -> bugfix_in_F ------> F3 -> F4
on the trunk the bugfix and the feature appear out of order - at least, out of historical order. But at least you can see the way things were really done.
(Of course, Mercurial doesn't really support moving a bugfix like this from a branch to the trunk. It's a partial merge. Hint: don't use hg merge - that's a full merge - you'll get the feature F1 and F2 as well as the bugfix. And if you undo that stuff before checking in the isolated bugfix, Mercurial will remember that, and make it hard to merge the feature later.
If you are lucky, the partial merge is at file granularity. You can do
hg update -r default; hg revert -r FeatureBranch file-that-was-bugfixed.cpp; test ...
If less lucky, merge by hand, cherrypick, edit patch files. ...
Or just redo.
)
But apart from Mercurial's not supporting partial merges,
what I really dislike is stripping the branch away from the history,
leaving
trunk ---------------------------------> bugfix -----> F
Now, it looks like you were smarter than you actually were: the bugfix happened out of the blue.
Worse... working with a group like my present project that is not test-driven, you may not have a failing test for the bugfix. Or, the failing test for the bugfix may only be integrated when you get the feature.
Sure, don't do that. I wish my project was test driven. But ...
Must rewrite Michael Feathers' Legacy Code book.
Wednesday, October 10, 2012
parameter list would like to have special name resolution
Have you ever seen something like
Some_Class::some_static_function(Some_Class::Enum_Value)
or
object_of_some_class.method(Some_Class::Enum_Value)
It would be nice if the Some_Class:: prefix were inferred or implicit in the parameter list.
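A small illustration, with made-up names:
struct Widget {
    enum Color { Red, Green, Blue };
    static void paint(Color) {}
    void fill(Color) {}
};

int main() {
    Widget w;
    Widget::paint(Widget::Red);   // today: the enclosing class must be repeated for the argument
    w.fill(Widget::Blue);         // wished for: w.fill(Blue), with Blue looked up in Widget's scope
}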
Tuesday, October 09, 2012
Why rewrite history? Why disentangle?
Why ever rewrite history in a version control system?
E.g.
http://www.fossil-scm.org/xfer/doc/trunk/www/fossil-v-git.wiki: Basically, to make things understandable.
Linus Torvalds does not need or want to see a thousand different branches, one for each contributor.
Linus has a longer post somewhere, along the lines of "the history should show how the code should have been written, not how it actually was written".
Because in real life, the actual development history, changes get entangled. Not nicely
A -> B -> C
but instead
A1 -> B1 -> C1 -> B2 -> A2 -> B3 -> C2 -> ...
which we might like to see reordered as
A1 -> A2 -> B1 -> B2-> B3 -> C1 -> C2
and smashed to
A -> B -> C
A = A1 -> A2, etc.
If the changes, patches, are operators that commute Darcs-style, all well and good. But many patches don't commute - logically they should, but in actuality do not.
--
Note that this entangling can be at file granularity, but is more painful when within the same file. Worse still if in the same line of code.
Hey - "entanglement". Darcs is inspired by physics, right?
--
OK, so we want to rewrite history. But the danger in rewriting history is that we might not end up where we want to.
More and more I am thinking that it would be a good idea to rewrite history in a lattice defined by the actual history.
E.g. if we have (bolding the states, using dA1B2 etc. to indicate differences)
0 -d0A1-> A1 -dA1B1-> B1 -dB1C1-> C1 -dC1B2-> B2 -dB2A2-> A2 -dA2B3-> B3 = FINAL
and we want
0 -d0A-> A -dAB-> B -dBC-> C = FINAL
then we should constrain the rewritten history to arrive at the same final state.
I.e. the actual history and the rewritten, virtual, for understanding history should be considered alternate paths to the same final state.
Imposing this constraint may be helpful in history rewriting tools.
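A minimal sketch of checking that constraint by hand in Mercurial (the revision names are placeholders):
hg status --rev real-final --rev rewritten-final
# empty output => the two histories end at the same file contents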
Representing these alternate paths may also be helpful. Sure, present the elegant history that Linus wants, but also preserve the grotty history that... well, historians like me want.
---
When rewriting history manually I find that the FINAL state keeps changing. Often rewriting history exposes issues that were not seen in the original path.
0 -d0A1-> A1 ... -> B3 = FINAL -dPostFinal-> FINAL'
 \                                             \
  ----d0A-> A -----dAB-----> B --------dBC------> C = FINAL'
A moving target, perhaps, but still alternate paths to the same ultimate final state.
How to do partial merges and cherrypick in Mercurial
Officially, you cannot. Mercurial has no formal support for partial merges and cherrypicking.
Actually, there are a few history editors that let you select hunks to apply. Plus, I have not yet really tried out mq, the quilt extension. So far my experience with Mercurial history editing tools is hit and miss, mainly miss. They mostly fail in the presence of merges, anything other than a simple linear divergence.
But you can do much by hand.
You can "cherry pick" individual files by using hg revert:
hg diff -r branch1 -r branch2
// look at diffs, figure out files that can be moved independently
hg update -r branch1
hg revert -r branch2 file1 file2
// test and then
hg ci
I find the name "hg revert" very strangely named. I think of revert as going back to an earlier version, going backwards, not makuing progress. But once you get past this, ok.
Merging changes in the middle of a file, interleaved with other changes, is more of a pain. Basically, edit the diff to create a patch file. Various tools can help.
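A minimal sketch of the edit-the-diff route, using only standard hg commands (the revision name is a placeholder):
hg export -r rev-with-the-fix > fix.patch
# edit fix.patch down to just the hunks you want to move
hg update -r branch1
hg import --no-commit fix.patch
# test, then hg ci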
The biggest problem with doing this by hand is that Mercurial does not really understand that you have done a partial merge. It is not recorded in the history. Revset expressions cannot be used to look at the ancestors or merges.
rewriting history means rerunning tests - for all versions in new history?
Something that annoys me about rewriting history in a version control system like Git or Hg, e.g. rebase, is that strictly speaking one should rerun all tests on all of the new versions.
E.g. if you had
trunk: A->B->C
and made your edits
trunk: A->B->C->X->Y->Z
and eventually wanted to merge into history from elsewhere
trunk: A->B->C->D->E
by rebasing
trunk: A->B->C->D->E->X'->Y'->Z'
then you *might* want to rerun tests on all of X', Y', and Z' -- not just on the final Z'.
Because while X' *should* be similar to X, except with D and E's changes, hopefully non conflicting - sometimes there will be conflicts that are not detected by the merge, only by running tests. Interferences.
I.e. sometimes X' will break tests, although X does not, and although Z' does not either.
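A minimal sketch of what "rerun everything" would look like; the revset Xprime::Zprime is a placeholder for however you identify the rewritten changesets:
for rev in $(hg log -r "Xprime::Zprime" --template "{node|short}\n"); do
    hg update -r "$rev"
    make -j test || echo "tests fail at $rev"
done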
---
Of course, this only matters if your project takes the "All checkins should pass all tests" approach. The approach that simplistic bisect uses.
It doesn't matter so much if you go back and retroactively label versions according to what tests have passed, etc. (Which, of course, Mercurial cannot do worth a darn.)
Gosh darn, I want hg push messages
Gosh darn I want hg push messages.
I want to be able to push, and say something like "Everything in this changegroup is on a branch. Don't panic. It is being cleaned up before it gets merged into the default trunk."
Exceptions that pop the stack versus occurring WITHIN the stack frame
In a high reliability situation where you might want to throw typed errors that are easily parsed so that you can mitigate the problem ... in such a situation, would you not at least like to throw the error WITHIN the context so that you can return to the erroring instructions? E.g. open a file if necessary, fix a permission problem by asking the user, etc.
I.e. exceptions that can fix up, and return, as if the exception never occurred.
The other sort of exception, the C++ style exception, pops/unwinds the stack. And can't return. The only way that you can handle such errors is to have the calling functions loop:
Hmm... might call this PUSH versus POP exception handling.
void outer() {
    bool failed;
    do {
        failed = false;
        try {
            inner();
        }
        catch(...) {
            // attempt fixup and repeat
            failed = true;
        }
    } while( failed );
}
Imagine if you had to handle page faults in this manner? Actually, you don't need to imagine - just look at how stack probes in old UNIX shells used to work.
This seems to be the criterion:
* if exception handling can be transparent to the calling code, you want WITHIN handling
* if exception handling requires the cooperation of the calling code, you want POP handling.
Examples of possibly transparent handling:
* page faults, where VM system can bring in
* removable disk or tape not mounted
* OS asking the user "are you sure?" before doing something that might be a security issue
* emulating unimplemented operations or instructions
Examples of non-transparent:
* stack overflow - you can't PUSH onto the present context, because there is no room left
There's a case for both. And, certainly, if PUSH is provided, there must be a way so that it can POP.
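A minimal sketch of faking WITHIN (PUSH) handling in ordinary C++, with a registered fixup callback standing in for real resumable exceptions; every name here is invented:
#include <functional>
#include <iostream>
#include <stdexcept>
#include <string>

// Invented example: a fixup handler that runs WITHIN the erroring frame and
// returns true if it repaired the problem, so the operation simply retries.
std::function<bool(const std::string&)> fixup_handler;

bool resource_ready = false;   // stands in for "file is open", "page is mapped", ...

int use_resource() {
    for (;;) {
        if (resource_ready)
            return 42;                                   // normal path: no unwinding happened
        if (fixup_handler && fixup_handler("resource not ready"))
            continue;                                    // handler fixed things up in place; resume
        throw std::runtime_error("resource not ready");  // fall back to POP-style unwinding
    }
}

int main() {
    fixup_handler = [](const std::string& what) {
        std::cerr << "fixing up: " << what << "\n";
        resource_ready = true;                           // e.g. mount the tape, ask the user, map the page
        return true;
    };
    std::cout << use_resource() << "\n";                 // prints 42 without any stack unwinding
}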
--
Older OSes like DEC VMS(?) reportedly provided both.
Obviously OSes provided PUSH exception handling for stuff like virtual memory.
But modern languages like C++ have definitely tended towards POP or UNWIND exception handling. If you can call C++ modern. (Q: Java? Javascript?)
--
It almost seems that PUSH is associated with change of privilege, while POP is not. Perhaps. But I would like to be able to provide things like user mode page fault handlers, or user mode integer overflow handlers. That is only a change of privilege if you have fine-grained privileges, finer than modern OSes provide.
Garbage collection makes it more practical to return good error messages
Elsewhere I have discussed how I buy into Andrei Alexandrescu and D's contention that throwing C++ style exceptions is the best way to signal errors, since a programmer cannot accidentally or deliberately forget to handle an error. My preference is to throw generic string error messages, unless I am in a high reliability situation where errors will be parsed and an attempt made to cure them, since you can then at least provide a context, or stack, of error messages.
But... languages with garbage collection do remove one objection to signalling errors by a return code. One problem with return codes is that they are so cryptic. Integers. Ugh. But if you can construct a meaningful string and return that, then the user at least has the option of printing a meaningful message.
e.g.
const char* foo(int bar, const char* file);   // RETURNS an error_msg string, or null if no error
if( const char* error_msg = foo(42,"bazz") ) {
std::cerr << "call to foo got error message: " << error_msg << "\n";
}
versus
foo(42,"bazz"); /// error code is ignored
At least with GC this pattern does not cause a memory leak.
---
I still prefer throwing C++ style exceptions, though.
Monday, October 08, 2012
Notifications not just for beginning but also for end of event
My daughter has an activity that my wife drops her off at, and from which I pick her up at the end.
Most calendar programs provide notifications and alarms only for the start of meetings and activities.
Now, one could (and historically I have) create separate events for the beginning and end. But this leads to inconsistencies, e.g. when the activity time is changed, but the pickup event is not changed.
Idea: provide multiple alarms or notifications for events, not just at start, but also at end.
Actually, more like a compound calendar item:
1) the activity or duration
2) the event items for my wife to take our daughter to the activity
3) the event item for me to leave wherever I am leaving from (work, home - lead time depends on where I am, and that should also be automated) and pick Sophie up.
Scheduled relative to the END of the event.
etc.
Cancelling such a compound event removes all.
Moving, changing the time - may want to query. If rescheduled, I may end up dropping off, and my wife may end up picking up.
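A minimal sketch of such a compound item as a data structure; every name here is invented:
#include <chrono>
#include <string>
#include <vector>

using TimePoint = std::chrono::system_clock::time_point;
using Minutes   = std::chrono::minutes;

// One alarm, anchored to either end of the activity
// (e.g. "leave to pick up" 30 minutes before the END).
struct Reminder {
    enum class Anchor { Start, End } anchor;
    Minutes offset;        // negative = before the anchor
    std::string who;       // which family member gets the notification
};

// The activity plus all of its drop-off / pick-up reminders as a single object:
// cancelling it removes everything, and rescheduling it moves every reminder with it.
struct CompoundEvent {
    std::string title;
    TimePoint start, end;
    std::vector<Reminder> reminders;
};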
---
May also want internal times, not just beginning and end.
E.g. for an all day event, e.g. my wife and daughter at one day of a multi day folk music festival, I may be able to attend only one lunch hour.
===
This general insight - that it is almost as important to schedule and remind yourself of when an activity should end as when it begins - is, I regret, somewhat new to me. It is implicit in stuff like Pomodoro scheduling. But I am only just now beginning to think of it explicitly.
Simple thing: I am trying to schedule an alarm on my Android device for the next time I should look up.
Now, which of the umpteen alarm programs should I use...?
Friday, October 05, 2012
A better solution than to_string(T) versus operator<<( stream,T)
to_string() versus stream operator<< ? A commonly occurring quandary for C++ programmers.
For large objects it is often just plain more efficient to write directly to a stream, e.g. output, or write to disk, than it is to write first to a string and then print it.
But it is often nice to be able to take the string representation, and then manipulate it.
I.e. sometimes you want to write to a stream. Sometimes to a string. And, yes, I know about ostringstream. Also ostrstream. (The ostringstream/ostrstream thrashing in the C++ standard is one reason why this issue was not resolved long ago.)
Also... sometimes the streams << notation is easier to use than the string concatenation operator +. Especially since << often has overloads that convert numerics etc. to strings and print them, whereas you usually don't want to overload operator+(string,T) because that can cause bugs.
---
All of that boils down to: should you provide:
string& to_string(const T& val);
or
std::ostream& operator<<(std::ostream& ostr, const T& val);
With the additional caveat that one is often misled by
string& T::to_string() const;
whereas
class T {
public: friend std::ostream& operator<<(ostream& ostr,const T& val);
}
is less misleading.
---
Here's a better way that accomplishes both ends:
class T {
public:
    class Formatter {
        const T& m_val;
        XXX m_extra_stuff; // ... extra parameters, like format directives
    public:
        friend std::ostream& operator<<(std::ostream&, Formatter);
        Formatter(const T& arg_val, XXX arg_extra_stuff = XXX() /* some default */)
            : m_val(arg_val), m_extra_stuff(arg_extra_stuff)
        {}
    };
};
Used like:
std::cout << Formatter(t_object) << "\n";
Or the like.
It's a little bit clunkier to get a string, not quite as elegant as
std::string s = t_object->to_string();
Something like:
std::string s;
std::ostringstream(s) << Formatter(t_object) << "\n";
But perhaps some creative overloading will allow
std::string s = Formatter(t_object) << "\n";
or similar.
---
Q: who do I give credit to?
---
For older C-style programming: print to stream, versus to string? Same issue, but the C++ streams syntax makes it more pleasant.
To String
#include <iostream>
#include <sstream>
#include <string>

class Foo {
public:
unsigned bazz;
public:
class Formatter {
private:
const Foo& m_objref;
public:
Formatter(const Foo& arg_objref) : m_objref(arg_objref) {}
friend std::ostream& operator<<(std::ostream& ostr, const Formatter f) {
ostr << std::hex;
ostr << f.m_objref.bazz;
ostr << std::dec;
return ostr;
}
operator std::string() const {
std::ostringstream os;
os << *this;
return os.str();
}
friend std::string operator+(const std::string& s, const Formatter f)
{
return s + (std::string)f;
}
friend std::string operator+(const Formatter f, const std::string& s)
{
return (std::string)f + s;
}
friend std::string operator+(const Formatter f1, const Formatter f2)
{
return (std::string)f1 + (std::string)f2;
}
};
};
int main()
{
Foo a, b;
a.bazz = 0xAAA;
b.bazz = 0xBBB;
std::cout << "stream: " << Foo::Formatter(a) << Foo::Formatter(b) << "\n";
{
std::string s;
std::ostringstream sostr(s);
sostr << "ostringstream: " << Foo::Formatter(a) << Foo::Formatter(b) << "\n";
std::cout << sostr.str();
}
{
std::string s = Foo::Formatter(a);
std::cout << "string = f: " << s << std::endl;
}
#if 1
{
std::ostringstream s;
s << Foo::Formatter(a);
s << Foo::Formatter(b);
std::cout << "string; <<; <<: p="p" s="s" std::endl="std::endl"> }
{
std::string s = Foo::Formatter(a) + Foo::Formatter(b);
std::cout << "string = f + f: " << s << std::endl;
}
#endif
{
std::string s = Foo::Formatter(a) + Foo::Formatter(b);
std::cout << "string = f + f: " << s << std::endl;
}
{
std::string s = "string = s + f: " + Foo::Formatter(b);
std::cout << s << std::endl;
}
{
std::string s = Foo::Formatter(b) + ": string = f + s";
std::cout << s << std::endl;
}
}
This is painful enough that I would like to make it a mixin.
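A minimal sketch of one way to make it a mixin, using CRTP; the host class only has to supply print_to(), and all the names here are invented:
#include <iostream>
#include <sstream>
#include <string>

// CRTP mixin: the host class only supplies print_to(std::ostream&) const,
// and gets stream output, a Formatter, and to_string() for free.
template <class Derived>
class Formattable {
public:
    class Formatter {
        const Derived& m_objref;
    public:
        explicit Formatter(const Derived& objref) : m_objref(objref) {}
        friend std::ostream& operator<<(std::ostream& ostr, const Formatter& f) {
            f.m_objref.print_to(ostr);
            return ostr;
        }
        operator std::string() const {
            std::ostringstream os;
            os << *this;
            return os.str();
        }
    };
    Formatter formatter() const { return Formatter(static_cast<const Derived&>(*this)); }
    std::string to_string() const { return formatter(); }
};

class Foo2 : public Formattable<Foo2> {
public:
    unsigned bazz = 0;
    void print_to(std::ostream& ostr) const { ostr << std::hex << bazz << std::dec; }
};

int main() {
    Foo2 a;
    a.bazz = 0xAAA;
    std::cout << "stream: " << a.formatter() << "\n";
    std::cout << "string: " << a.to_string() << "\n";
}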