Disclaimer

The content of this blog is my personal opinion only. Although I am an employee - currently of Nvidia, in the past of other companies such as Imagination Technologies, MIPS, Intellectual Ventures, Intel, AMD, Motorola, and Gould - I reveal this only so that the reader may account for any possible bias I may have towards my employer's products. The statements I make here in no way represent my employer's position, nor am I authorized to speak on behalf of my employer. In fact, this posting may not even represent my personal opinion, since occasionally I play devil's advocate.

See http://docs.google.com/View?id=dcxddbtr_23cg5thdfj for photo credits.

Wednesday, November 01, 2006

eBeam

Yet another vain attempt to use Dave Sager's eBeam device to record my whiteboard scribblings.

By now, this device is so old that it would probably be best to buy a new one. Yet I cannot get the purchase of a new one approved, because nobody ever really used the old one. Sager got it working, but its usage petered out. I have never been able to get it working at all on my laptops.

It's a pity. I would love to have a way of recording what is written on a physical whiteboard. At the moment, it looks like a photograph is the best way... although I would love to record the time series.

Monday, October 16, 2006

Campaign Finance Reform - Not Limit - Matching Funds for "Equal Time"

Topher:
As I listened to yet another of the perpetual discussions about Campaign Finance Reform - what is it, Oregon Measure 47? - I came up with the following idea. It's new to me.
Instead of trying to limit campaign contributions, which has all sorts of free speech implications, how about encouraging them?
But, tax them in the following way:
If somebody contributes N dollars to a political purpose, require that a matching amount f*N dollars be placed in a fund for "equal time".
The "equal time" funds would be distributed as follows: everyone - any citizen, any voter, whatever - gets an equal share of the "equal time" fund. He or she can assign his or her share of the funds to whatever cause or organization he or she wishes.
E.g.
* some might assign all of their funds to their party, Democratic or Republican or ...
* some might assign their funds to their church
* I would probably assign my share to a "Contrarian" organization, dedicated to debunking the top 10 mistruths publicized by the original donation. E.g. I would fund counterattacks against the Swift Boat Veterans for Truth in the Kerry campaign, but equally fund counterattacks against Democrat mistruths.
Bottom Line: the "equal time" funds would be distributed democratically. Each person would control his share of the money.
The ratio f might be set equal to 1; it might even be larger than one. Without loss of generality, let us talk as if f=1.
In such a situation, half of the money in politics would be distributed as it is now, according to the inclinations of the wealthy; but half would be distributed on a per capita basis. A plutocrat seeking to advance his cause would have to balance his expenditure against the consideration that he is also financing his opponents.
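To make the arithmetic concrete: a donation of N dollars puts N under the donor's control and f*N into the "equal time" pool, so the pool's share of all political money is f*N / (N + f*N) = f / (1 + f). At f = 1 that is half; at f = 2 it would be two-thirds.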
The downside, of course, is that the "equal time" funds are distributed proportionately. But it's like Churchill said: "Democracy is the worst form of government, except for all the others."
One might even be able to have a graduated scale for the "equal time" funds: the poor might not have to match such funding, but the rich might. If you are a progressive, though, beware of what you ask for: the Republican party has many more small contributors than does the Democratic party, which is funded mainly by a small number of wealthy individuals (and the unions).
I think this scheme might be able to meet US Supreme Court standards: it is not restricting free speech. It is just taxing it; and taxing it in a way that is eminently neutral. All speech gets taxed; and the taxes on free speech encourage still more free speech. By the way, that is not "free as in beer" free speech, eh?
Lest it become too difficult to tell what is a political contribution and what is not, how about making this just a tax on ALL advertising, commercial, political, or otherwise?

Colored Zips

I was just fumbling with my rollaboard - the sort of carryon luggage one rolls through airports, designed to fit into an airplane's overhead storage. I am now rolling one around at work, since I am carrying my bicycle gear - helmet, jacket, etc. - and I often do not have time to go to my office.

Anyway, I was fumbling with my rollaboard, looking for the pocket into which I had placed my lunch. Six choices, different-sized pockets. I ended up opening them all.

Although size and position of the zipped pocket should be a clue, how about another:

I seem to remember colors very well. If the pockets all had zips of different colors, it might make it easier to remember which pocket you had slipped something into.

Monday, September 18, 2006

Ideal Communication Tool

On 9/17/06, Dean Kent wrote:
So - let me ask you a question: Do you have an idea of what would be your
ideal tool for all your communication needs? Without sticking to what
exists today. For example, if everyone on Usenet would instantly convert to
this new tool as well as everyone else you currently communicate with, or
download information from, etc. What kind of things would such a tool need
to do in order to satisfy you (Yes, I read your post all the way through,
but it seems like much of it was based upon 'what is', rather than 'what
would be ideal'.)

Just curious... (of course, you can post the answer to comp.arch if you
wish, rather than directly to me - if you desire to answer it at all).

I may post it to my blog, but it's off-topic for comp.arch. Not sure what USEnet group it would be on-topic for.

First off, "without sticking to what exists today" is impractical. The purpose of a communication tool is communication - the best tool is useless if there is nobody to talk to. So, a first requirement is that the tool talk all of the standard protocols - email, nntp, IM, RSS, etc.

Let's start off with the text oriented tools:
I/O: email, USEnet news, blogging, IM'ing
I was about to put RSS down as input only, but that is not true - I may have a wiki site that generates an RSS feed to others.

Output: one of my key desires is multimode multicast: you compose a block of text, maybe with graphics - your favorite format, e.g. XML with SVG graphics, bitmaps, etc. You want to be able to send it or post it to any subset of selected services. NOT cut and paste it from one to another, e.g. writing an email and then pasting it into your wiki. Instead, you write the text once, and then you can select any or all of the following:
* Email: To, CC, and Bcc
* USEnet newsgroups to post to
* Blog sites to post to
  * e.g. I often want to post the same item to my inside-the-company internal blog and my external Blogger blog
* Wiki sites to post to
  * wikis are a bit odd in that they are context sensitive - typically you click on a page
  * but nevertheless I find that I often need the same stuff posted in more than one wiki
  * similarly, many wiki pages contain text boxes and the like, such as TWiki's comment box; all of these are really separate potential output streams
* IM conversations to post to
  * yes, occasionally I am in 3 or 4 IM conversations at once - not the same conversation, but several separate ones - and I want to post something to more than one
* FAX: send to fax
* SMS: send to cell phone text message

I want to be able to create a "list" which consists of any or all of the above.
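To make the shape of this concrete, here is a minimal sketch in C++ - every class name here is invented, and the real per-protocol plumbing (SMTP, NNTP, blog and wiki APIs) is stubbed out with prints:

    // Hypothetical sketch of multimode multicast: compose once, post to a
    // user-selected list of output channels. Class names are invented;
    // real SMTP/NNTP/blog/wiki bindings would live behind Channel.
    #include <iostream>
    #include <memory>
    #include <string>
    #include <vector>

    struct Message {
        std::string subject;
        std::string body;  // could be XML with embedded SVG, bitmaps, etc.
    };

    class Channel {  // one per output medium: email, news, blog, wiki, IM...
    public:
        virtual ~Channel() = default;
        virtual void post(const Message& m) = 0;
    };

    class EmailChannel : public Channel {
        std::string to_;
    public:
        explicit EmailChannel(std::string to) : to_(std::move(to)) {}
        void post(const Message& m) override {
            std::cout << "email to " << to_ << ": " << m.subject << "\n";
        }
    };

    class NewsChannel : public Channel {
        std::string group_;
    public:
        explicit NewsChannel(std::string g) : group_(std::move(g)) {}
        void post(const Message& m) override {
            std::cout << "post to " << group_ << ": " << m.subject << "\n";
        }
    };

    int main() {
        // The "list": any subset of services, selected per message.
        std::vector<std::unique_ptr<Channel>> list;
        list.push_back(std::make_unique<EmailChannel>("someone@example.com"));
        list.push_back(std::make_unique<NewsChannel>("comp.arch"));

        Message m{"POR update", "write once..."};
        for (auto& ch : list) ch->post(m);  // post everywhere, no cut and paste
    }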

Note: above I am just talking about a user interface for media. I am not designing new media. Let's do a media mode taxonomy, and see if anything is missing:
Email: transient, deferred, disconnected, point to point
Blog: transient, semi-permanent, but temporal; deferred, disconnected, broadcast, one to many
IM: immediate, point to point (with multicast); transient, temporal
Wiki: permanent, reference - not intended to be temporal


Out of all of these, I think that what is missing are tools to assist you in creating both the transient, temporally contextual stuff and the current, latest reference stuff. E.g. I was in a meeting this AM: Rajeev wrote up minutes; I updated our POR (Plan of Record) document. It would be nice if we could get both at once.

I guess that I am really vociferously arguing against having to do things more than once.

Also... I think that everybody, every individual, needs a personal issue tracking system. Something to keep track of the highest priority thing to do next.

Note that YOU said "communications system". I said "communications and information management system". Communication is just one form of information. Memory is another. I want my communications, both incoming and outgoing, to be prioritized in the same list as my To-do items.

Most issue tracking systems are too heavyweight. You have to create an issue report. I want to be able to just say "Action Required" in a document, e.g. in an outgoing email, and have an entry in the issue tracker be automatically created (e.g. with a pointer back to the original document, possibly with the surrounding paragraphs extracted to explain the issue).

Similarly "Action Required By YYYY MM DD" should automatically do the date extraction thing. As should PORs, etc.

The personal information management system should track resources. So, e.g., if you commit to 3 days of work here, and 4 there, in the next 5 days, it should warn you of an overcommit.

... But you asked about communications, not calendaring, so I will not go too far down this path in this email.

Let's return to communication. We have already talked about multimodal multicast output. Now let's talk about input:


Basically, I see all of these separate input streams - email, RSS, IM, phone, etc.

I want filtering and prioritizing across all of them: e.g. any communication from my wife or my boss takes priority, no matter the input channel.

I want the same reader interface to be used for all inputs. That is easy enough to imagine for textual inputs; for recorded voice or video, similarly easy.

For live voice or video, it may be harder - you snooze, you lose. Although that's actually part of what I want to avoid. Record everything - all text, all voice, all visual inputs. If you are listening in on 2 phone meetings simultaneously while listening to email, have the ability to play back the last 2 minutes when something catches your ear. Compress... it's fairly easy to compress voice into 1/3rd the elapsed time. So, do small scale time shifting and scaling - listen in a minute or so behind. With this, you may actually be able to listen to several meetings all at once, without losing anything.
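The record-everything part is, at bottom, just a ring buffer per input stream. A sketch, with the buffer size and sample type picked arbitrarily:

    // Sketch of small-scale time shifting: keep the last N samples of each
    // audio stream in a ring buffer so "the last 2 minutes" can be replayed
    // (and then played back sped up, to catch back up to real time).
    #include <cstddef>
    #include <iostream>
    #include <vector>

    class ReplayBuffer {
        std::vector<int> samples_;  // circular buffer of recent audio samples
        std::size_t next_ = 0;
        bool full_ = false;
    public:
        explicit ReplayBuffer(std::size_t capacity) : samples_(capacity) {}

        void record(int s) {  // called for every incoming sample
            samples_[next_] = s;
            next_ = (next_ + 1) % samples_.size();
            if (next_ == 0) full_ = true;
        }

        // Oldest-to-newest copy of what is currently buffered.
        std::vector<int> replay() const {
            std::size_t start = full_ ? next_ : 0;
            std::size_t count = full_ ? samples_.size() : next_;
            std::vector<int> out;
            for (std::size_t i = 0; i < count; ++i)
                out.push_back(samples_[(start + i) % samples_.size()]);
            return out;
        }
    };

    int main() {
        ReplayBuffer rb(4);                    // keep only the last 4 samples
        for (int s : {1, 2, 3, 4, 5, 6}) rb.record(s);
        for (int s : rb.replay()) std::cout << s << " ";  // prints: 3 4 5 6
    }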

Speech to text - it's a help. I can manage 4 IM conversations, but only 2 phone meetings.

Enough for now. I will now cut and paste this to my blog.

Saturday, September 16, 2006

Refactoring returns by introducing booleans

Consider code that looks like:
    void foo() {
        if( some-condition ) return;
        if( some-other-condition ) { do-something; return; }
        etc.

Say this code is replicated in several places. Obviously we want to extract it into a common subroutine or class - the extract method refactoring.

If you can jump right to the end of the refactoring, great. But, I admit that I have occasionally tripped myself up doing this. (The example is, of course, oversimplified.)

So, here's a micro-refactoring that can help:

Extract to a function that returns a bool, named check_and_do_some_things_and_maybe_return()


    void foo() {
        if( check_and_do_some_things_and_maybe_return() ) return;
        ...
    }

    bool check_and_do_some_things_and_maybe_return() {
        if( some-condition ) return true;
        if( some-other-condition ) { do-something; return true; }
        return false;
    }
    etc.

This requires editing each return.

If you want to avoid that, maybe add a boolean temporary to the original code - instead of returning immediately, delay all of the returns to the end:

    void foo() {
        bool return_early = false;
        if( some-condition ) return_early = true;
        if( !return_early ) {
            if( some-other-condition ) { do-something; return_early = true; }
            if( !return_early ) ...
        }
        if( return_early ) return;
THEN you can safely extract the method a line or statement at a time.

Company abandons internal news servers; thoughts about PC versus Google

Somewhat off-topic, but I feel impelled to note that Intel has just ZBB'ed its internal NNTP news servers. Actually, they were ZBB'ed many years ago, but volunteers kept them going. Those volunteers may now be ZBB'ed. New volunteers may arise; heck, I may; but the path of least resistance is to give up on getting USEnet news inside the company, and go to some external service. E.g. today I am posting from Google Groups.

Personal relevance to comp.arch: my employer's internal news servers have been my main connection to comp.arch since 1991, with brief exceptions while I was in Madison and at AMD. Prior to joining Intel I participated in comp.arch and its predecessor net.arch on news servers from the University of Illinois and from Gould and Motorola. I still maintain that I learned more computer architecture from comp.arch than I did in any school; moreover, I am fairly confident that I would never have gotten my job with Intel without my comp.arch mediated acquaintance with Bob Colwell.

Generic relevance to comp.arch: this is a trend. Actually two trends.

Trend #1 is that less and less personal computing can be done at work, and that more and more work-related computing is "freeloading" on personally paid-for computing.

Most people used to have only one email address that they used for both work and personal matters. You can still do this, but it is becoming increasingly hard to do so because companies like my employer do not allow you to access the corporate network from your own computers; you can only do so from a company owned device.
So you have a personal mail service, as well as your work mail service. Maybe your personal mail service is from an ISP, and changes whenever your ISP changes, you move, or when Qwest gets bought out by Verizon. Maybe you have your own domain.
But your company doesn't allow you to run POP across the firewall. Similarly for newsgroup access: your company doesn't allow you to run NNTP across the firewall.

This leads to Trend #2: Google. More generically, the rebirth of "Big Iron", centralized, computer service companies.
Google *is* "Big Iron". Maybe not in the IBM mainframe sense, but anyone who has seen a Google machine room knows that it is a completely different scale than a desktop or laptop PC.
For many years I tried to keep my personal computing environment PC based. I ran my mail reader on my laptop or desktop PC, sometimes via a client technology such as POP, IMAP, sometimes peer-to-peer stuff like SMTP. Similarly, up until now I have read news on my laptop or desktop PC. When I saved a file, it was saved on my PC's hard disk. I could not access my environment of saved files and email without being on my PC. Maybe I could read my email from other computers, but I did not have my mailreading environment on those other computers, so I tried to avoid doing so.
But, not being able to access my personal email from work - no POP, no ssh - was the last straw. I switched to Google mail. Now I can access my personal email from any computer - at work, at home, from my wife's computer. From my relatives' computers. I no longer need to drag my personal laptop around with me.

Downside: I cannot access my Google email when I am not connected to the net. For many years this was the biggest reason that I stayed PC based. Broadband took a long time to get to many of the places where I spend time, like Oceanside, Oregon, and the Ottawa river valley in Canada. Broadband is still not available in many of my favorite places, such as Eastern Oregon. Heck, cell phone service is not available. (I am waiting for reports of the Microsoft/KVH mobile broadband with interest.)
Perhaps most important for business folk, I cannot access Google email on a plane, when I am not connected to the Internet.
Yes, I know: you can access Google mail via POP, downloading it to a mobile PC where you can read it disconnected. But that just puts you back in the "your mailreading environment lives on only one PC" mode. So far as I know, there is no way to download Google mail to your PC, and then upload back to Google any annotations, tags, classifications, and spam markings you have made to your email while disconnected.
I hope that Google will soon remedy this, and provide disconnected operation, not just for email, but also for other Google services such as Google groups.

Interestingly, moving to Google mail has provided more freedom from the point of view of form factor. In my "my mailreading environment lives on a single laptop PC" days, I needed to have a laptop that met my minimum needs for all common situations. E.g. it had to have a big enough screen, enough disk, and a keyboard. But now that I am Google based I can seriously consider reading email on a keyboardless tablet in my living room, or on a PDA, or..., since I can always go to another device. I.e. I am more likely to buy a "widget" specialized computer now that I am using Google mail than I was when I used a PC.
I hypothesize that this is true not just of me, but also of other users. Perhaps the long awaited flowering of specialized devices for ubiquitous computing is now about to begin.

Terminology change: I used to read my mail on my laptop PC. Now I read it on Google, via a web browser that happens to run on a PC, but which could run elsewhere. I used to be a PC user. Now I am a Google user.

USEnet news is just another information service, like email. Same considerations apply. Since I have switched to Google mail, I might as well switch to Google groups. Ditto RSS, and other information services.
What I really want is to receive all of my information inputs in a common environment that can seamlessly prioritize and sort email, USEnet news, RSS, regular news, IM, and telephony. Google is the most likely company to achieve this.

Interestingly, I have been forced into schizophrenia. My work information feeds are in one place, my personal feeds in another. At the moment it appears that the personal feeds on Google are more integrated, have better search abilities, etc., although far less storage.
Will this keep up? Or will the quality of information management at work play leapfrog with Google? I do not know... but I predict that at least some fraction of companies will just outsource their employees' email, etc., to Google. I.e. I predict that Google will be able to provide a single stop for both work and personal information management. And that because of this, it will have a larger critical mass than companies that are stuck just supporting an individual's work computing and information needs.

Returning to trend #1: not only will less and less personal computing be done at work, but more and more work computing will be done personally... because the personal computing environment, whether Google based or whatever, is pulling ahead of the work environment. (Unless Google takes over the work environment, as predicted above.)
The item that sparked this post is just an example: reading USEnet newsgroups such as comp.arch is recreation, but I also fairly regularly post queries to newsgroups such as comp.lang.c, etc., for work related questions. Closing down the company's internal news service, of course, means that I will now be doing this using my personal computing resources. I.e. Google groups.
More evidence of this trend: back in the old days companies paid for 2nd phone lines for computer access. Nowadays you are expected to pay for your own broadband access, and to use it for after hours work. I keep meaning to take an income deduction for my broadband for tax purposes, since the logs plainly show that it is mostly used for work, not pleasure.

Summing up:
Prompted by: my company abandoning internal news servers.
Hypothesis: there is a trend away from PC based computing and information services, towards centralized computer services like Google.

The computer industry battle is not Intel versus AMD, or Microsoft versus Linux. It is the PC versus Google.

(Here, I use "Google" as representative of web based computing services, ubiquitously accessible so long as you have Internet access.)

Saturday, September 02, 2006

Lifestyle computers

Labor Day weekend, sitting on the beach - I'm not surfing because I am watching my daughter play in the sand. No book to read. It would be nice to be able to use my computer on the beach - but it would have to be a laptop with no airholes for cooling, no PC-Card slots. Sealed, no entry for sand or water. Plus a daylight visible screen.

Merging my diverged CVS trees

I hope that I can use Git's content based features to merge my diverged CVS trees.

Git, Renaming, etc

Git does not track file renames.

Linus's email, http://permalink.gmane.org/gmane.comp.version-control.git/217, is essentially correct, but shortsighted. Linus argues that file renames are just a special case of moving content around. E.g. how does file rename tracking help you if you take a source file foo.h, move all of its content to bar.h, and then put #include "bar.h" inside foo.h? Linus argues that a content based tool that can detect this sort of movement is more useful than tracking renames in Git.

Linus is right. But, Linus is shortsighted. Linus is considering only one usage model: that of a programmer studying code history. There is a human in the loop.

I want to use a version control system such as Git to perform such actions when a human is not in the loop.

E.g. I want to use Git, instead of RCS, as the version control system in twiki. Then, if a user requests http://mytwikisite.net/OldTopic, I can tell him or her that OldTopic has moved to NewTopic. To do this, the content tracking system must be able to make a single suggestion.

Of course, a content tracking system can make a single suggestion. But it may get confused - e.g. two different wiki pages may start off from a common ancestor. Some human may have to decide which is the most logical primary choice. If that choice has been made, why not record it? That amounts to recording the "primary" renaming.
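A sketch of what recording the rename buys an automated client: no content analysis, no human in the loop, just a lookup. The map and the page names are invented for illustration:

    // Hypothetical sketch: a wiki records "primary" renames explicitly, so
    // redirecting http://mytwikisite.net/OldTopic needs no human in the loop.
    #include <iostream>
    #include <string>
    #include <unordered_map>

    int main() {
        std::unordered_map<std::string, std::string> renamed_to =
            {{"OldTopic", "NewTopic"}};

        std::string page = "OldTopic";
        while (renamed_to.count(page))   // follow a chain of recorded renames
            page = renamed_to.at(page);  // (a real tool would detect cycles)
        std::cout << "redirect to " << page << "\n";  // NewTopic
    }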

Similarly, even a wiki may want to have a content tracking system with a human in the loop. But it is nice to have a fully automated system, with no human in the loop, that takes advantage of recorded rename tracking.

And, I hazard, even an SCM will need the fully automated version.

I.e.:

Rename tracking is a convenience.

Content tracking, with or without a human in the loop, is of additional value.

Content based filesystems and VC

It's time that I switched to Git for version control of my home directory - which has been CVS'ed for the last, what, 10 years, maybe longer - and which has also been RCS'ed, SCCS'ed, and, briefly, BK'ed.

A few months ago Linus told me that Git was ready for folks like me to use it. At the same time he also discussed the following issue:

Git is, essentially, a content based filesystem or object store. It uses the SHA-1 cryptographic hash of a file as that file's name.

When last Linus and I talked (which was also the first and only time we have ever talked in person, at a restaurant - we've only briefly talked via email and newsgroups otherwise) Linus assured me that Git could handle the case where two objects had the same hash. I haven't seen how it is handled in the code. The Git discussion groups seem to be populated by people who cannot imagine that a collision would ever occur. I'm willing to believe that Linus has handled it properly. I'm going to use this blog to discuss what properly is.

First off, the basics: I'm with Val Henson, http://infohost.nmt.edu/~val/review/hash.pdf: using compare-by-hash without being prepared to handle collisions is bad, Bad, BAD. Given a large enough collection of files, a hash collision will occur. Worse if one can be constructed by a malicious user seeking to corrupt a repository.
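For concreteness, the standard birthday-bound estimate: for an ideal b-bit hash over n objects, P(collision) ~= n*(n-1) / 2^(b+1). With SHA-1's b = 160, an accidental collision is fantastically improbable - but the probability is not zero, it grows with the square of the repository size, and the estimate says nothing at all about collisions constructed by an adversary.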

But I really like the convenience and efficiency of compare-by-hash - I call it "content based" addressing. So I want a content based scheme that handles collisions.

I can't understand why everyone doesn't do it: it's pretty simple.

E.g. in the object store, you create an index that maps hashes to object names. The object names could still be the cryptographic hash, perhaps with an instance number appended: i.e. the first object in the repository that has hash F00 would be named F00-0, the next F00-1, etc.
If you are so confident that there will never be collisions, you will only ever see F00-0 names, right? What harm does it do to append the instance number?

Or, you could use some other scheme for creating a name that is unique within the repository, such as a sequence number. In fact, I would probably suggest using both such designation schemes: sequence numbers inside the repository are more likely to need to be changed when two different repositories are merged. Hashes are more likely to be unique - although, if you merge two repositories that both have F00-0 objects, which are different, one is going to be named F00-1 after the merge. But, again, for those who are confident that collisions will never occur, you will never see this _slight_ renaming.

The only way in which this deviates from compare-by-hash content based systems such as Monotone is that, when you are adding an object to the repository, you don't just stop when you have computed the hash, and have seen that the hash is already in the repository. You have to compare the new object to the object that is already in the database that has the same hash. I.e. you use the hash to determine who to compare to, but you still do a full comparison.

Note: to compute the hash of the new object, you have to scan all of its bytes. That's O(n) work. To do the full comparison is still O(n) work - albeit with a bigger constant multiplier, circa 3X (one scan for the hash, then a scan of both the new and old object for the compare).

This suggests that you may not want to use a cryptographic hash of the entire file's n bytes. You may want a sampled hash. E.g. a hash of the first kilobyte, or a random selection of bytes. The purpose of the hash is now just to reduce the probability of collision, not eliminate it.
(Of course, the real issue is that no hash ever eliminates the possibility of collision: people just fool themselves that it does.)

Another trivial optimization is to use the object size as well as its hash. If two objects that have the same hash differ in size, you can avoid the full comparison. If you are one of those people who believe that SHA-1 hash collisions can never occur, then certainly they will never occur for objects of the same size. So, by your own standard of belief, the full file comparison will never be done.
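Putting the pieces together - hash as a hint, an instance number to disambiguate collisions, and the cheap size check before the full compare - here is a minimal sketch. All of the types are invented for illustration; a real store would SHA-1 the file bytes and persist the index:

    // Minimal sketch of collision-handling content-based naming. The hash
    // is a hint; an instance number disambiguates genuinely distinct
    // objects that share a hash.
    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <unordered_map>
    #include <vector>

    struct Store {
        // hash -> every distinct object that happens to share that hash
        std::unordered_map<std::string, std::vector<std::string>> index;

        // Returns the object's name: "<hash>-<instance>".
        std::string add(const std::string& hash, const std::string& bytes) {
            auto& bucket = index[hash];
            for (std::size_t i = 0; i < bucket.size(); ++i) {
                // Cheap size check first; full compare only on a size match.
                if (bucket[i].size() == bytes.size() && bucket[i] == bytes)
                    return hash + "-" + std::to_string(i);  // already stored
            }
            bucket.push_back(bytes);  // new object that collides on the hash
            return hash + "-" + std::to_string(bucket.size() - 1);
        }
    };

    int main() {
        Store s;
        std::cout << s.add("F00", "object A") << "\n";  // F00-0
        std::cout << s.add("F00", "object A") << "\n";  // F00-0, same content
        std::cout << s.add("F00", "object B") << "\n";  // F00-1: a collision
    }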

So, let's compare what I propose - using the hash as a hint, but handling collisions - to what Monotone did, using hash as a name, not providing for collisions. Let's call them collision-handling-content-based, and collision-neglecting-content-based:

If you believe that hash collisions will never occur:

  • when naming objects in the repository, my collision-handling approach and Monotone's collision-ignoring approach will work the same.
  • when adding objects to the repository, my approach *might* require a full comparison - but the simple expedient of including the object size in the hash comparison avoids this in all of the cases where no hash collision occurs.


I.e. my collision handling approach and Monotone's collision ignoring approach are equivalent, so long as no collisions occur. And, of course, if collisions occur, Monotone breaks, but my approach still works.

The only reason that I can possibly imagine for not using my approach is code complexity. Why add code that will never be exercised? How do you test it? When I started writing rcs-file-merge - until I learned that Linus was writing Git - I planned to use the content-based approach. But I deliberately used a really weak hash - the UNIX checksum - to create collisions, so that I could test that I could handle them. I found collisions in my home directory. Of course, a stronger hash would be used in actual production.

I really, really, hope that Git handles hash collisions. As I describe above, it is very straightforward, and costs nothing in performance.

Wednesday, August 23, 2006

Outlook woes / ISO a single reading interface

Last week I switched back to Outlook email. It only took 4 days until Outlook corrupted an OST file, and hung sending mail. My hands and shoulders and wrists hurt, because of Outlook's lousy human interface design.

Also, I was reading USEnet news and email together in EMACS Gnus. Now, with Outlook, my USEnet newsgroup reading rate has fallen off. I am too lazy to use multiple interfaces.

The only saving grace is that I have moved personal stuff to Google Mail and Calendar.

Unfortunately, now I have more different interfaces - Google Mail, Outlook, USEnet.

Tuesday, August 22, 2006

I hate Microsoft Outlook (Calendar)!!!

Damn! For the umpteenth time I missed a meeting. Microsoft's Outlook Calendar did not notify me... Ironically, I was posting about Google Calendar at around that time.

Outlook Calendar might not be so bad if I could access it from my cell phone, or if it could text message me. But IT has decided that would be insecure.

Outlook Calendar might not be so bad if it delivered alarms on time. But I regularly receive alarms 10 minutes, an hour, sometimes days late. I begin to suspect that the alarm generator can be locked out by other Outlook tools, such as spam filters.

Now that I am using Google Mail and Google Calendar, I can very easily imagine becoming Google centric. It is a hassle to have to deal with two email systems and two calendar systems, one for work and one for my personal life.

Apart from security, the only real advantage of Microsoft Outlook that I see is disconnectability: I have to be connected to the Internet to use Google Mail and Calendar, whereas I can use Microsoft Outlook on a plane. I hope that Google will remedy this.

I have begun to think about a generic tool for merging data from disconnected sessions. XML greatly facilitates this, since all data representations can be converted into XML. At the very least, one could replicate and version the diverged XML ... clauses. This is better than versioning at a file or document level. Specific merge tools can also be created.

Version control software is the fundamental tool for replicated and disconnectable, distributed, databases.

This is actually relevant to my job: if Google software supplants Microsoft, then there is much less need for heavyweight PCs or laptops. Thin clients may be encouraged. Although it is not clear that a fancy Ajax application won't itself require a powerful PC.

Hierarchies for Google Calendar

This topic is what inspired me to get a Google Blogger site:

I just set myself and my wife up with Google Calendar.

We each have our own calendars, plus one for our daughter. I have also set up a Family calendar. Plus, I have set up "Possibilities" calendars, for things that I am interested in enough to want to be able to see on my calendar, but which I am not committed to going to.

And I have also subscribed to various public calendars, such as tides, events, etc.

After all this I end up having something like 20 calendars connected to my Google Calendar.

Some would say this is overkill. I take the viewpoint that it would be reasonable, if only Google Calendar had tools for managing such long lists of meta-calendars. Unfortunately, those tools are lacking.

Basic tools such as being able to select or deselect all calendars would be nice.

Going further, I would like to be able to group my calendar subscriptions in various ways. Hierarchically, but possibly overlapping.

E.g. I would like a button to be able to link my personal and family "Possibilities" calendars - so that I can easily see just commitments.

E.g. I want to be able to enable/disable viewing of all of my public calendar subscriptions.

E.g. I would like to group my public calendar subscriptions by topics, such as music, recreation, kid's activities.

In particular, I find it most annoying to see lots of events that I am not interested in on some of the local public event calendars. I can copy just the events that I am interested in to one of my own calendars, but that doesn't help me the next time I want to look at new activities on the public calendar. I think that I want to create "subtractive" calendars - a personal calendar that is linked to a public calendar, but where I have explicitly removed the events that I am not interested in.
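The "subtractive" calendar is really just a set difference that is recomputed against the live public calendar, so newly added public events still show up. A sketch, with invented event IDs:

    // Hypothetical sketch of a "subtractive" calendar: a view of a public
    // calendar minus an explicit exclusion set. Because the subtraction is
    // applied to the live calendar, newly added public events still appear.
    #include <iostream>
    #include <set>
    #include <string>

    int main() {
        std::set<std::string> public_calendar =
            {"concert-42", "fun-run-7", "bake-sale-3"};
        std::set<std::string> not_interested = {"bake-sale-3"};

        for (const auto& event : public_calendar)
            if (!not_interested.count(event))
                std::cout << event << "\n";  // concert-42, fun-run-7
    }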

Similarly, I would like to be able to annotate events in a public calendar - e.g. with car pool arrangements for a concert. Once again, making a local copy is a start - but once that local copy is edited, the link to the original event, which may be rescheduled, may be lost (or at least confused).

Blog Topics

What can I blog about?

Unfortunately, I cannot blog about my work. Stuff that I am working on is company proprietary.

It is hard to blog about computer architecture given the above. Although I can discuss this in public, I usually tend to do so in the USEnet newsgroup comp.arch. Unfortunately, that is more transient than this blogsite.

I'd love to have all of my USEnet posts automatically saved to this blogsite...

As you can probably see, collaboration tools and software are an interest only slightly below computer architecture. I guess that I am already blogging about this.

Wiki as a Bookmark Sharing System

See http://twiki.org/cgi-bin/view/Support/BookmarkSharingWithTWiki.
I am enjoying this quite a lot - I want to share it with others.

By now it may be renamed to http://twiki.org/cgi-bin/view/Sandbox/BookmarkSharingWithTWiki

Moral: twiki should use permanent URLs.

Backing up Google Blogger Content

If I understand Google Blogger's Terms of Service, I own the content I post.
Am I allowed to use an automated tool to make a backup of such content on disks I own?

I am still not sure that I trust Google to never go away, or to not ever seek to privatize content posted on their sites and stored on their disks.

Hmm...

Giving in to Google Blogger / wishing for Wiki

I suppose that I am giving in by creating a Google Blogger page. Giving in in two different ways:

(1) I would really prefer to have a publicly visible wiki site, rather than a blog.

(2) I really should be running my own public website, with both wiki and blog, rather than piggybacking on Google.

I've been running websites and wikisites inside the companies that I work for for a long time, on company intranets. I keep meaning to set up my own public website on the Internet. I have, from time to time, exposed some of my servers (personal servers, that is, not company servers). But I am lazy, and paranoid about security. Perhaps I should be using a web hosting service, one that supports wiki software such as twiki; but I am reluctant to spend money on this sort of vanity publishing.

As we say in Portland: "Free is a very good price". Hence Google.

I prefer wikis to blogs. Or, rather, wikis are a superset of blogs. I've written my own blogs using fairly vanilla twiki, mildly customized, for a while. I'm aware of quite fancy blog sites and systems built on top of wiki.

I've thought long about the differences between blogs and wikis. I've posted about this on my intranet wiki/blog; I may move some of that here, but not now. Just briefly, the main differences seem to be:
* wikis are oriented towards editing, revision, and eventually creating reference material
* blogs are transient - an article is typically written at a single point in time
  * although blog articles may be revised, that is not the usual situation
  * whereas revision at any time is the essence of wiki
* blogs are typically oriented around a primary author
  * others may comment on a blog page, but such comments are typically distinct from the main posting
* wikis are oriented around collaboration
  * multiple users may comment on a wiki page, in "discussion mode"
  * but it is often the intention and hope of a wiki site that such comments will be folded into a consensus rewrite of the wiki page (although the separate comments should certainly be retained in revision history, and perhaps in a separate "flow of consciousness" wiki page)

This makes it sound like there should be no such thing as a single user wiki. That is wrong: I find a single user wiki (and, for that matter, a single user blog) very useful. Helps organize my thoughts. Plus, I hold sufficiently diverse opinions inside my head that I can frequently have a conversation with myself on a private wiki page or blog page.

Single user wikis are quite useful on devices such as PDA phones, as means of organizing information.