Disclaimer

The content of this blog is my personal opinion only. Although I am an employee - currently of Imagination Technologies' MIPS group, in the past of other companies such as Intellectual Ventures, Intel, AMD, Motorola, and Gould - I reveal this only so that the reader may account for any possible bias I may have towards my employer's products. The statements I make here in no way represent my employer's position, nor am I authorized to speak on behalf of my employer. In fact, this posting may not even represent my personal opinion, since occasionally I play devil's advocate.

See http://docs.google.com/View?id=dcxddbtr_23cg5thdfj for photo credits.

Monday, September 26, 2016

MODULES FOR XUNIT TESTING IN PERL




Not a bad summary:



Test::Unit is a port of JUnit into Perl.  Familiar to xUnit users.



Test::Class

Much like xUnit; xUnit inspired. The unfamiliar names muck up porting, but I can live with that.

Plays well with traditional Perl test tools like TAP and Test::Builder.

Somewhat object oriented. But:

 Test::Class does not provide its own test functions, but uses those provided by Test::More and friends 

which are free functions, so somewhat annoying to extend (e.g. to report error locations accurately when you build meta-tests that call multiple asserts internally). But you can access the underlying Test::Builder routines.

Uses the :Test attribute so that introspection can find test functions to run, including setup/teardown.

Pleasant - many folks have had to add such "keep running" behaviour to xUnit.

Unlike JUnit the test functions supplied by Test::More et al do not throw exceptions on failure. They just report the failure to STDOUT where it is collected by Test::Harness. This means that where you have
sub foo : Test(2) {
      ok($foo->method1);
      ok($foo->method2);
      ok($foo->method3) or die "method3 test failure";
      ok($foo->method4);
  }

The second test will run even if the first one fails, but if the third fails, the die will stop the fourth from running.
Test::Unit

Test::Unit is a port of JUnit http://www.junit.org/ into Perl. If you have used JUnit then the Test::Unit framework should be very familiar.

It is class based so you can easily reuse your test classes and extend by subclassing. You get a nice flexible framework you can tweak to your heart's content. If you can run Tk you also get a graphical test runner. However, Test::Unit is not based on Test::Builder. You cannot easily move Test::Builder based test functions into Test::Unit based classes. You have to learn another test assertion API.

Test::Unit implements its own testing framework separate from Test::Harness. You can retrofit *.t scripts as unit tests, and output test results in the format that Test::Harness expects, but things like todo tests and skipping tests are not supported.


But... the Test::Class author does not say that Test::Unit is mostly abandoned as of 2016, possibly since 2011 or before.

Test::SimpleUnit

A very simple unit testing framework. If you are looking for a lightweight single module solution this might be for you. The advantage of Test::SimpleUnit is that it is simple! Just one module with a smallish API to learn. Of course this is also the disadvantage.

It's not class based so you cannot create testing classes to reuse and extend. It doesn't use Test::Builder so it's difficult to extend or integrate with other testing modules. If you are already familiar with Test::Builder, Test::More and friends you will have to learn a new test assertion API. It does not support todo tests.

Wednesday, September 14, 2016

Are there any good uses for multiple Perl fat commas in series ( a => b => 1 )? - Stack Overflow




Making a copy of my own post.

It sucks that you can't cut and paste pseudo-formatted text between sites like StackOverflow and Blogger. Wasn't HTML supposed to solve that? Oh, yeah: scripting attacks. I won't bother to fix the formatting. (I started fixing it by hand, but must stop wasting time. So much time is wasted fixing formatting when copying between tools.)


**---+ BRIEF**



In addition to notation for graphs and paths (like Travelling Salesman, or critical path), multiple serial fat arrow/commas can be nice syntactic sugar for functions that you might call like



    # Writing: creating $node->{a}->{b}->{c} if it does not already exist

    assign_to_path($node=>a=>b=>c=>"value");

 

    # Reading

    my $cvalue = follow_path($node=>a=>b=>c=>"default value");



the latter being similar to



    my $cvalue = ($node->{a}->{b}->{c})//"default value";



although you can do more stuff in a pointer chasing / hashref path following function than you can with //
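For readers more comfortable in another language, the same pointer-chasing idea is easy to sketch over nested Python dicts. The function name and exact behaviour here are my own illustration, not the author's library:

```python
def follow_path(node, *keys, default=None):
    """Walk a chain of nested-dict keys; stop and return the default
    as soon as any link in the path is missing or not a dict."""
    for key in keys:
        if not isinstance(node, dict) or key not in node:
            return default
        node = node[key]
    return node

node = {"a": {"b": {"c": 42}}}
print(follow_path(node, "a", "b", "c", default="default value"))  # 42
print(follow_path(node, "a", "X", "c", default="default value"))  # default value
```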



It turned out that I already had such functions in my personal library, but I did not know that you could use `a=>b=>"value"` with them to make them look less ugly where used.



**---+ DETAIL**



I usually try not to answer my own questions on this forum, encouraging others to - but in this case, in addition to the contrived example I posted inside and shortly after the original question, I have since realized what I think is a completely legitimate use for multiple fat arrow/commas in series.



I would not complain if multiple fat arrows in series were disallowed, since they are quite often a real bug, but there are at least two places where they are appropriate.



**(1) Entering Graphs as Chains**



Reminder: my first, totally contrived, use case for multiple fat arrow/commas in series was to make it easier to enter certain graphs by using "chains". E.g. a classic deadlock graph would be, in pairs, `{ 1=>2, 2=>1 }`, and as a "chain", `(1=>2=>1)`. If you want to show a graph that is one big cycle with a "chord" or shortcut, it might look like `([1=>2=>3=>4=>5=>6=>1],[3=>6])`.



Note that I used node numbers: if I wanted to use node names, I might have to do `(a=>b=>c=>undef)` to avoid having to quote the last node in a cycle `(a=>b=>"c")`. This is because of the implicit quote on the left hand but not the right hand argument. Since you have to put up with undef to support node names anyway, one might just "flatten" `([1=>2=>3=>4=>5=>6=>1],[3=>6])` to `(1=>2=>3=>4=>5=>6=>1=>undef, 3=>6=>undef)`. In the former, end of chain is indicated by the end of the array `[...]`; in the latter, by undef. Using undef puts every node on the left hand side of a `=>`, so it is syntactically uniform.
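The chain notation can be expanded mechanically into explicit edge pairs; here is a sketch (in Python for brevity; the helper name is mine):

```python
def chains_to_edges(*chains):
    """Expand node chains like (1, 2, 3, 1) into explicit edge pairs:
    every pair of consecutive nodes in a chain is one edge."""
    edges = []
    for chain in chains:
        edges.extend(zip(chain, chain[1:]))
    return edges

# The big cycle with a chord from the text: ([1=>2=>3=>4=>5=>6=>1],[3=>6])
print(chains_to_edges((1, 2, 3, 4, 5, 6, 1), (3, 6)))
# [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1), (3, 6)]
```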



I admit that this is contrived - it was just the first thing that came to mind.



**(2) Paths as a data type**



Slightly less contrived: imagine that you are writing, using, or testing code that is seeking "paths" through a graph - e.g. Hamiltonians, Traveling Salesman, mapping, electronic circuit speed path analysis.  For that matter, any critical path analysis, or data flow analysis.



I have worked in 4 of the 6 areas I just listed. Although I have never used Perl fat arrow/commas in such code (usually Perl is too slow for such code when I have been working on such tasks), I can certainly avow that, although it is GOOD ENOUGH to write (a,b,c,d,e) in a computer program, in my own notes I usually draw arrows (a->b->c->d->e). I think that it would be quite pleasant to be able to code it as `(a=>b=>c=>d=>e=>undef)`, even with the ugly undefs. `(a=>b=>c=>d=>e=>undef)` is preferable to `qw(a b c d e)`, if I were trying to make the code resemble my thinking.



"Trying to make the code resemble my thinking" is often what I am doing.  I want to use the notations common to the problem area.  Sometimes I will use a DSL, sometimes write my own, sometimes just write some string or text parsing routines  But if a language like Perl has a syntax that looks almost familiar, that's less code to write.



By the way, in C++ I often express chains or paths as



    Path p = Path()->start("a")->link_to("b")->link_to("c")->end("d");



This is unfortunately verbose, but it is almost self-explanatory.



Of course, such notations are just the programmer API: the actual data structure is usually well hidden, and is seldom the linear linked list that the above implies.



Anyway - if I need to write such "path-manipulating" code in Perl, I may use `(a=>b=>c=>undef)` as a notation --  particularly when passed to a constructor like Path(a=>b=>c=>undef) which creates the actual data structure.



There might even be some slightly more pleasant ways of dealing with the non-quoting of the fat arrow/comma's right hand side: e.g. sometimes I might use a code like 0 or -1 to indicate closed loops (cycles) or paths that are not yet complete: `Path(a=>b=>c=>0)` is a cycle, `Path(a=>b=>c=>-1)` is not. 0 rather looks like a closed loop. It is unfortunate that this would mean that you could not have numeric nodes. Or one might leverage more Perl syntax: `Path(a=>b=>c=>undef), Path(a=>b=>c=>[]), Path(a=>b=>c=>{})`.



All we are doing here is using the syntax of the programming language to create notations that resemble the notation of the problem domain.





**(3) Finally, a use case that is more "native Perl"-ish.**



Have you ever wanted to access   `$node->{a}->{b}->{c}`, when it is not guaranteed that all of the elements of the path exist?



Sometimes one ends up writing code like



When writing:



    $node = {} if not defined $node;

    $node->{a} = {}  if not exists $node->{a};

    $node->{a}->{b} = {}  if not exists $node->{a}->{b};

    $node->{a}->{b}->{c} = 0;



When reading ... well, you can imagine. Before the introduction of the // operator, I would have been too lazy to enter it. With the // operator, such code might look like:



    my $value = $node->{a}->{b}->{c}//"default value if the path is incomplete";



Yeah, yeah...  one should never expose that much detail of the datastructure.  Before writing code like the above, one should refactor to a nice set of object oriented APIs.   Etc.



Nevertheless, when you have to deal with somebody else's Perl code, you may run into the above.  Especially if that somebody else was an EE in a hurry, not a CS major.



Anyway: I have long had in my personal Perl library functions that encapsulate the above.



Historically, these have looked like:



    assign_to_hash_path( $node, "a", "b", "c", 0 )

    # sets $node->{a}->{b}->{c} = 0, creating all nodes as necessary

    # can follow or create arbitrarily long chains

    # the first argument is the base node,

    # the last is the value

    # any number of intermediate nodes are allowed.
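The same calling convention - base node first, value last, any number of keys between - can be sketched in Python, where setdefault gives the "creating all nodes as necessary" behaviour. The names here are illustrative, not the author's actual library:

```python
def assign_to_hash_path(node, *path_and_value):
    """Set node[k1][k2]...[kn] = value, creating intermediate dicts
    as needed. First argument is the base dict, last is the value,
    everything between is a key."""
    *keys, last_key, value = path_and_value
    for key in keys:
        node = node.setdefault(key, {})  # autovivify missing links
    node[last_key] = value

node = {}
assign_to_hash_path(node, "a", "b", "c", 0)
print(node)  # {'a': {'b': {'c': 0}}}
```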



or, more obviously an assignment:



    ${hash_path_lhs( $node, "a", "b", "c")} = 0

    # IIRC this is how I created a left-hand-side

    # by returning a ref that I then dereffed.



and for reading (now usually // for simple cases):



    my $cvalue = follow_hash_path_undef_if_cannot( $node, "a", "b", "c" );



Since the simple case of reading is now usually //, it is worth mentioning less simple cases, e.g. in a simulator where you are creating (create, zero-fill, or copy-on-read), or possibly tracking stats or modifying state like LRU or history



    my $cvalue = lookup( $bpred_top => path_history => $path_hash => undef );  

    my $cvalue = lookup( $bpred_top => gshare => hash($pc,$tnt_history) => undef );  



Basically, these libraries are the // operator on steroids, with a wider selection of what to do if the full path does not exist (or even if it does exist, e.g. count stats and cache).





They are slightly more pleasant using the quote operators, e.g.



    assign_to_hash_path( $node, qw{a b c}, 0);

    ${hash_path_lhs( $node, qw{a b c})} = 0;

    my $cvalue = follow_hash_path_undef_if_cannot( $node, qw{a b c});



But now that it has sunk into my thick head after many years of using perlobj, I think that fat arrow/commas may make these look much more pleasant:



    assign_to_hash_path( $node => a => b => c => 0);

    my $cvalue = follow_hash_path( $node => a => b => c => undef );



Unfortunately, the LHS function doesn't improve much because of the need to quote the last element of such a path:



    ${hash_path_lhs( $node=>a=>b=>"c" )} = 0;

    ${hash_path_lhs( $node=>a=>b=>c=>undef )} = 0;



so I would be tempted to give up on LHS, or use some mandatory final argument, like



    ${hash_path_lhs( $node=>a=>b=>c, Create_As_Needed() )} = 0;

    ${hash_path_lhs( $node=>a=>b=>c, Die_if_Path_Incomplete() )} = 0;



The LHS code looks ugly, but the other two look pretty good, expecting that the final element of such a chain would either be the value to be assigned, or the default value.



    assign_to_hash_path( $node => a => b => c => "value-to-be-assigned");

    my $cvalue = follow_hash_path( $node => a => b => c => "default-value" );



Unfortunately, there is no obvious place to pass keyword options - the following does not work because you cannot distinguish optional keywords from args, at either beginning or end:



    assign_to_hash_path( $node => a => b => c => 0);

    assign_to_hash_path( {warn_if_path_incomplete=>1}, $node => a => b => c => 0);

    my $cvalue = follow_hash_path( $node => a => b => c => undef );

    my $cvalue = follow_hash_path( $node => a => b => c => undef, {die_if_path_incomplete=>1} );



I have occasionally used a Keyword class, abbreviated KW, so that a type inquiry can tell us which is the keyword, but that is suboptimal - actually, it's not bad, but it is just that Perl has no single BKM (yeah, TMTOWTDI):



    assign_to_hash_path( $node => a => b => c => 0);

    assign_to_hash_path( KW(warn_if_path_incomplete=>1), $node => a => b => c => 0);

    my $cvalue = follow_hash_path( $node => a => b => c => undef );

    my $cvalue = follow_hash_path( KW(die_if_path_incomplete=>1), $node => a => b => c => undef );

    my $value = follow_hash_path( $node => a => b => c => undef, KW(die_if_path_incomplete=>1) );
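The KW idea - a distinguished type so the callee can tell the keyword group apart from path arguments by a type inquiry - might be sketched like this (Python; all names are illustrative):

```python
class KW(dict):
    """Keyword group: a dict subclass whose only job is to be
    distinguishable, by isinstance, from ordinary path arguments."""

def follow_hash_path(*args, default=None):
    opts, path = {}, []
    for arg in args:
        # The type inquiry tells us which argument is the keyword
        # group, so it may appear at either end of the call.
        if isinstance(arg, KW):
            opts.update(arg)
        else:
            path.append(arg)
    node, *keys = path
    for key in keys:
        if not isinstance(node, dict) or key not in node:
            if opts.get("die_if_path_incomplete"):
                raise KeyError(key)
            return default
        node = node[key]
    return node

node = {"a": {"b": {"c": 1}}}
print(follow_hash_path(node, "a", "b", "c"))                      # 1
print(follow_hash_path(KW(), node, "a", "X", default=-1))         # -1
```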



**Conclusion: Foo(a=>b=>c=>1) seems strange, but might be useful/nice syntactic sugar**



So: while I do rather wish that `use warnings` had warned me about `foo(a=>a=>1)`, when a keyword was duplicated by accident, I think that multiple fat arrow/commas in series might be useful in making some types of code more readable.



Although I haven't seen any real-world examples of this, usually if I can imagine something, a better and more perspicacious Perl programmer has already written it.



And I am considering reworking some of my legacy libraries to use it. In fact, I may not have to rework - the library that I designed to be called as



    assign_to_hash_path( $node, "a", "b", "c", 0 )



may already work if invoked as



    assign_to_hash_path( $node => a => b => c => 0 )



**Simple Working Example**



For grins, an example of a simple path-following function that does a bit more error reporting than is convenient to do with //:



    $ bash 1278 $>  cat example-Follow_Hashref_Path.pl
    use strict;
    use warnings;

    sub follow_path {
        my $node = shift;
        if ( ref $node ne 'HASH' ) {
            print "Error: expected \$node to be a ref HASH,"
                . " instead got "
                . ( ref $node eq ''
                        ? "scalar $node"
                        : "ref " . ( ref $node ) )
                . "\n";
            return;
        }
        my $path      = q{node=>};
        my $full_path = $path . join( '=>', @_ );
        foreach my $field (@_) {
            $path .= "->{$field}";
            if ( not exists $node->{$field} ) {
                print "stopped at path element $field"
                    . "\n    full_path = $full_path"
                    . "\n    path so far = $path"
                    . "\n";
                return;
            }
            $node = $node->{$field};
        }
    }

    my $node = {a=>{b=>{c=>{}}}};

    follow_path($node=>a=>b=>c=>"end");
    follow_path($node=>A=>b=>c=>"end");
    follow_path($node=>a=>B=>c=>"end");
    follow_path($node=>a=>b=>C=>"end");
    follow_path({}=>a=>b=>c=>"end");
    follow_path(undef=>a=>b=>c=>"end");
    follow_path('string-value'=>a=>b=>c=>"end");
    follow_path('42'=>a=>b=>c=>"end");
    follow_path([]=>a=>b=>c=>"end");

and use:

    $ perl example-Follow_Hashref_Path.pl
    stopped at path element end
        full_path = node=>a=>b=>c=>end
        path so far = node=>->{a}->{b}->{c}->{end}
    stopped at path element A
        full_path = node=>A=>b=>c=>end
        path so far = node=>->{A}
    stopped at path element B
        full_path = node=>a=>B=>c=>end
        path so far = node=>->{a}->{B}
    stopped at path element C
        full_path = node=>a=>b=>C=>end
        path so far = node=>->{a}->{b}->{C}
    stopped at path element a
        full_path = node=>a=>b=>c=>end
        path so far = node=>->{a}
    Error: expected $node to be a ref HASH, instead got scalar undef
    Error: expected $node to be a ref HASH, instead got scalar string-value
    Error: expected $node to be a ref HASH, instead got scalar 42
    Error: expected $node to be a ref HASH, instead got ref ARRAY
    ✓
    $

**Another Example `($node->{a}->{B}->{c}//"premature end")`**


    $ bash 1291 $>  perl -e 'use warnings;my $node={a=>{b=>{c=>"end"}}}; print "followed path to the ".($node->{a}->{B}->{c}//"premature end")."\n"'

    followed path to the premature end

    $ bash 1292 $>  perl -e 'use warnings;my $node={a=>{b=>{c=>"end"}}}; print "followed path to the ".($node->{a}->{b}->{c}//"premature end")."\n"'

    followed path to the end

I admit that I have trouble keeping the binding strength of // in my head.

**Finally**


By the way, if anyone has examples of idioms using `//` and `->` that avoid the need to create library functions, especially for writes, I'd love to hear of them.

It's good to be able to create libraries to make stuff easier or more pleasant.

It is also good not to need to do so - as in `($node->{a}->{B}->{c}//"default")`.

Later:

At Stack Overflow, @mp3 pointed out that the fat arrow/comma can be a terminator, e.g. (a=>b=>c=>). It doesn't help much in general when you have multiple chains, or to separate keywords in follow_path($node=>a=>b=>c=>"default", want_keyword=>1), but it looks not-so-bad for Path(a=>b=>c=>).

Inspires EVIL PERL TRICK: print Do => You => Like => Barewords =>

May not want to be associated with such evil. 

I have often thought that the reason that we don't actually use Perl as our interactive shell like bash is that bash defaults to barewords, whereas Perl usually requires quotes.

Methinks that it should be possible to create a single language, with the same keywords and operators, that can be turned "inside out":

One mode where strings require quotes:

var a ="string-value"

a second mode where things are string by default, and it is the keywords and syntax that needs to be quoted (here by {}):

{var} {a} {=} string-value

The latter might be useful in literate programming. Same programming language constructs, just inverted. Although the embedded programming language syntax might be most like Perl interpolation - might need different quotes for code producing a value within the text, and code operating on the text.

Command line shells are, for the most part, already a hybrid: the first word on a line is special, a command - everything else is a string by default.

Sunday, September 11, 2016

IDEA: Enhanced Timeouts to Ramp Down and Ramp Up Treadmill (watchapp)


This is not a complaint!  I am the happy owner and user of a TR1200-DT3.  

I just want to make a suggestion that you might consider for an enhancement.

CC'ing workwhilewalking.com since they are the reason I bought the LifeSpan treadmill and one of their electric desks - even though you don't allow them to sell over the web.

From the TR1200-DT3 Under Desk Treadmill Manual
Intelli-Guard™ Walk confidently knowing your safety is assured with Intelli-Guard™. Step away from your treadmill for more than twenty seconds and your treadmill’s belt will automatically glide to a smooth stop. 
Intelli-Step™ Never miss a stride with Intelli-Step™. Your steps are automatically calculated with meticulous precision, displaying immediate feedback and historical trends via your Club account. (OK, a minor complaint: the LifeSpan Club and apps are pretty useless. But that's okay, wearing a FitBit on my shoe works.) 
Problem:

Intelli-guard stops 20 seconds after I step off treadmill, if walking faster than 1mph.

Well, up until now this was never a problem - but all of a sudden I have started being able to walk 1mph and faster, while working on my PC.  E.g. while writing this email. (And I am regularly getting 30-40K steps a day, counted by FitBit, and feeling great!)

The problem: I often step away for a bit - e.g. when the doorbell rings, or for a biobreak, or to get a cup of tea (typically while a slow build is going on).

When I get back, Intelli-Guard has stopped the treadmill. Whereas in the past, when walking slowly, it would stay running, and I would just hop back on and resume walking while working.

That's not so much a problem in itself; the problem is that I often forget to start the treadmill up again, thinking "oh, I will just make a quick change and rebuild".

... eventually I realize that I have been standing without using the treadmill for 2 hours.  Knees and back aching, etc.  (Walking on treadmill is much less stressful than standing - and better exercise as well.)

I have tried disabling the Intelli-Guard timeout (that's what I am doing now), and also resuming at the last speed. Neither is satisfactory - especially not when I realize that I left the treadmill running overnight.

Suggestion:

1) Preferences for timing out the treadmill.   E.g. not 20 seconds, but configurable, up to an hour or so.

2) Two timeouts:

2.1) Timeout #1 ramps the treadmill down from speed in use to some slow default, like 0.4mph.   

2.2) Timeout #2 stops treadmill after an hour or so, to avoid wasting power.

These two timeouts should be easy.
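The two-timeout policy can be sketched as a pure function of idle time. All the numbers here are made-up defaults matching the suggestion above, not LifeSpan behaviour:

```python
def belt_speed(idle_seconds, walking_speed,
               slow_speed=0.4, ramp_down_after=60, stop_after=3600):
    """Two-stage timeout: after ramp_down_after idle seconds, drop to
    a slow default speed; after stop_after, stop entirely to avoid
    wasting power."""
    if idle_seconds >= stop_after:
        return 0.0           # timeout #2: full stop
    if idle_seconds >= ramp_down_after:
        return slow_speed    # timeout #1: ramp down, belt keeps moving
    return walking_speed

print(belt_speed(10, 1.2))    # 1.2 - still at walking speed
print(belt_speed(300, 1.2))   # 0.4 - ramped down
print(belt_speed(4000, 1.2))  # 0.0 - stopped
```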

For extra credit:

3) Recognize when user is back, and ramp back up.

3.1) e.g. if in the "slow mode", you can recognize that the user is back by noticing the steps.

3.2) extra credit, a sensor - visual, or bluetooth with phone or watch, or ...

Ramping up might be a safety challenge.  Best if you can query user, and ask "do you want to ramp back up to speed".

Where to query?
  • existing primitive console
  • fancier console (you probably have)
  • phone app (oh, no, I hate phones, now that I use a ...)
  • watch
    • could be as simple as a text or other push 
      • "Do you want to ramp up treadmill?" 
      • that user can reply to (some watches can, e.g. Apple or my Pebble; apparently not FitBit yet)
    • might be a no-reply notify / text
      • "I am about to ramp up. Press stop while ramping up to stop at that speed..."
    • or you can write a watchapp
      • but that gets you into the losing game of "which watch?" And you thought Android/iPhone was a pain.
    • IMHO watches are a great way to control stuff like this
      • more personal than a phone
      • much harder to set down
  • IMHO a text (or better, a secure messaging app, if there is a standard) is the way to go.  Gets you watch and phone. and also PC. With or without reply from device sent to.
BTW - although SMS texts are not secure - and you would not want a hacker to be able to remote control a treadmill, that could cause severe injury - you can send authenticated messages between the LifeSpan treadmill and a device.

BlueTooth, of course - probably BT/LE - is more secure.
Once again: I am a happy user of my treadmill desk. 
But I am an engineer, and always want to improve.  (Worse, a computer architect)
---+ LOW PRIORITY

FYI I have corresponded with you in the past about
flakey boot (looks like power sequencing)
  •  y'all offered to have me send in the console for a fix
  • but I never got around to it - too much hassle, and what I have works - don't want to give it up. 
  • and transferring LifeSpan data to somewhere, anywhere, 
  • like FitBit 
    • not useful - I already have a FitBit, sometimes 2 
       •  one on my feet that counts while on treadmill, since hands typing don't count (on shoes I keep by the treadmill for use on the treadmill, clean, not used outside)
      • wristband, that counts when not using treadmill
      • FitBit can handle multiple FitBit stepcounters, but not non-FitBit stepcounters
  • or Apple Health 
    • again doesn't understand multiple stepcounters
  • or my own spreadsheet
    • I can certainly handle / record multiple stepcounters
    • I can even reconcile, so long as I have fine grain counts, e.g. per minute
      • even if clocks are out of synch (time warp)

Saturday, September 10, 2016

Perforce: "Git ignore syntax is fully supported" - NOT :-(

---+ BRIEF

Posting in the hope of saving somebody else the trouble of figuring out that Perforce does not support full wildcards in P4IGNORE, even though they say "Git ignore syntax is fully supported".

Mildly annoyed.

But also amused: to work around Perforce's limitations, I did
heck!: echo {a,b,c,d,....}{a,b,c,d,...} >> .p4ignore Had to shrink to fit tweet. wc .p4ignore now 1418 lines.
Combinatoric explosion is so much fun.  Should I go for three characters ???
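The brace-expansion workaround amounts to enumerating every short filename explicitly; a sketch of the count involved (lowercase letters only here - the actual .p4ignore in the tweet evidently covered a larger character set):

```python
from itertools import product
from string import ascii_lowercase

# Without a working '?' wildcard, every 1- and 2-letter name must be
# listed explicitly - hence the combinatoric explosion.
one_char = list(ascii_lowercase)
two_char = ["".join(pair) for pair in product(ascii_lowercase, repeat=2)]
print(len(one_char) + len(two_char))  # 26 + 676 = 702 patterns
```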

---+ DETAIL

I hate it when ... somebody, whether the help file, SW developer blog, or ads ... say that "something is fully supported", when it is not.

I was very excited to see that Perforce might support .gitignore-comparable wildcards. I wasted an hour proving that, while P4V 2015.2 does support some .gitignore wildcards, it does not support all - specifically not the ? single-character wildcard, or patterns such as ?? for 2-character filenames, etc.
@p4sven I won't bother you any more. Only did because you said "Git ignore syntax is fully supported". Wrong! ? wildcards are not 
P4IGNORE: Ignorance is Bliss | Perforce: "Git ignore syntax is fully supported"
It is okay to say "Perforce supports some or most .gitignore syntax, to the point where you can share .gitignore files with projects using git, and still use Perforce if you want to." - this is fine - honest,  and valuable even if not complete.

Even better if you can say what you do not support. So I don't waste time figuring out what isn't properly supported.

But to say that you have full support when you do not WASTES MY TIME.

Worse, git is Open Source; it probably has test suites for .gitignore that Perforce could have looked at. (Hmm, could that be a license violation? Don't use the code, just extract some test patterns...)

Worse comes to worst, some random file patterns could have been run on both p4 and git.

---

Why ? ??

I have a habit of creating files with single character filenames when doing emacs text wizardry to create lists of files to work on (ironically, to work around Perforce limitations).

E.g. filenames like "a", "b".  Also "a1", "a2".

Aside: Yeah, I have been bitten by "cc" and other two letter executables, back when I was doing OS releases for Gould and Motorola:
$ bash 1405 $>  ls /bin/??
/bin/cp* /bin/dd* /bin/df* /bin/ed* /bin/ln*
/bin/ls* /bin/mv* /bin/ps* /bin/rm* /bin/sh*
Why not "tmp*"?  Well, I use that too - but ? and ?? are shorter and easier to type.


---+ SEE ALSO

See also http://stackoverflow.com/questions/18240084/how-does-perforce-ignore-file-syntax-differ-from-gitignore-syntax




Thursday, September 08, 2016

Fat commas / fat arrows in series

I just got bitten by a bug caused by two fat commas / fat arrows in series:

$ bash $> perl -e 'use strict; use warnings; my @v = ( a=> b => 1 )'
✓ 
actually in a function; actually in a constructor for an object (blessed hash), so I was thinking {} when it was new( a=>b=>1).

$ bash $>  perl -e '
    use strict; use warnings;
    sub kwargs_func{ print "inside\n"; my %kw = $_[0] ;};
    kwargs_func( a=> b => 1 )
'
inside
Odd number of elements in hash assignment at -e line ##.
✓ 
Obviously I found the bug fairly quickly - but I would prefer to have had a compile-time error or warning rather than a run-time error.



Q: are there any good uses for fat commas in series?



I am surprised that there was not a 'use warnings' warning for this.



---



I like functions with keyword arguments.  In Perl there are two main ways to do this:

func_hash_as_array_arg( kwarg1=>kwval1, kwarg2=>kwval2 )
func_hashref_as_scalar_arg( { kwarg1=>kwval1, kwarg2=>kwval2 } )
which can be mixed with positional in a reasonably nice way
func( posarg1, posarg2, kwarg1=>kwval1, kwarg2=>kwval2 )
func( posarg1, posarg2, { kwarg1=>kwval1, kwarg2=>kwval2 } )
and also in less nice ways
func( { kwarg1=>kwval1, kwarg2=>kwval2 }, varargs1, vargags2, ... )
Although I prefer f(k1=>v1) to f({k1=>v1}) - less clutter - the fact that the hashref "keyword argument group" gives slightly more compile-time checking is interesting. I may flip.



Of course, the real problem is that Perl needs a proper syntax for keyword arguments.



Perl6 does it better.
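For contrast, a language with first-class keyword arguments catches exactly this class of bug before anything runs. In Python, for example, keyword arguments arrive as a real dict, and a repeated keyword is a compile-time SyntaxError:

```python
def new(**kwargs):
    # Keyword arguments arrive as a real dict; there is no flattened
    # list for ( a => b => 1 )-style mis-pairing to slip into.
    return kwargs

print(new(a="b", b=1))  # {'a': 'b', 'b': 1}

# new(a="b", a=1) would not even compile:
# SyntaxError: keyword argument repeated: a
```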



---



For grins, some related code examples with 2 fat commas in series.

$ bash $>  perl -e 'use strict; use warnings; my %v = ( a=> b => 1 )'
Odd number of elements in hash assignment at -e line 1.
$ bash $>  perl -e 'use strict; use warnings; my $e = { a=> b => 1 }'
Odd number of elements in anonymous hash at -e line 1.
✓ 
$ bash $>  perl -e 'use strict; use warnings; my $e = [ a=> b => 1 ]'
✓ 
$ bash $>  perl -e '
    use strict; use warnings;
    sub kwargs_func{ print "inside\n"; my %kw = $_[0] ;};
    kwargs_func( a=> b => 1 )
'
inside
Odd number of elements in hash assignment at -e line ##.
✓ 
$ bash $>  perl -e '
   use strict; use warnings;
   sub kwargs_func{ print "inside\n"; my %kw = %{$_[0]} ;};
   kwargs_func( {a=> b => 1} )
'
Odd number of elements in anonymous hash at -e line ##.
inside
✓ 
---

Not the same problem, but along the same lines: When a fat comma is confusing | Samuel Kaufman [blogs.perl.org]



Friday, August 19, 2016

Beyond Getopt: Doc + Test > CodeGen

For decades I have been automating generating code for boilerplate like command line parsing (getopt) and generating documentation for that code.  E.g. circa 1985 in my first job I wrote tools to generate manpages from what looks very much like Perl POD (Plain Old Documentation).



I admit to joining the "test-first" and "test-driven" bandwagon later. Grudgingly at first - Bob Colwell at Intel/P6 said "if it doesn't have an automated test, it doesn't exist".   With growing enthusiasm as I discovered Agile and refactoring circa 1996.



Yet for all of my love of automation (i.e. my laziness), I was never quite happy with getopt style functions, whether in C, C++, Perl, etc. Often the standard libraries did not parse options the way I want:

  • Typically I want options to be parsed strictly left to right, so that later options can override earlier options, so that you can create configurations and then customize. Except, e.g., when I want to disallow that.
  • Often I want expression parsing as part of the options, so that you can say CPUSIM -cachesize=linesize*sets*ways -sets=4 ...
  • I want to be able to have the same or related options parsers for command line, environment variables, and config files.
  • I want modular, composable options parsing - at Intel we called this "knobs" parsing, to the merriment of pommies - so that I can plug a cache model into the L2 position, and instantly get all of the appropriate options for that new L2 cache.
  • And so on.
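To make the left-to-right and expression-parsing wishes concrete, here is a tiny sketch (Python; the CPUSIM-style option names come from the example above, everything else is invented). Later options override earlier ones, and an option value may be an expression over options already set:

```python
def parse_options(args, defaults):
    """Strict left-to-right parsing: each -name=expr is evaluated in
    order, so later options can override and build on earlier ones."""
    opts = dict(defaults)
    for arg in args:
        name, _, expr = arg.lstrip("-").partition("=")
        if name not in opts:
            raise SystemExit(f"unknown option: {arg}")
        # Evaluating over the options seen so far is what allows
        # -cachesize=linesize*sets*ways (toy use of eval; a real
        # parser would use a safe expression evaluator).
        opts[name] = eval(expr, {}, opts)
    return opts

opts = parse_options(
    ["-linesize=64", "-sets=4", "-ways=8", "-cachesize=linesize*sets*ways"],
    {"sets": 1, "ways": 1, "linesize": 32, "cachesize": 0},
)
print(opts["cachesize"])  # 64 * 4 * 8 = 2048
```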



Plus, of course, I always want to automate the creation of docs and help messages, so that they are guaranteed to be kept in sync with the parsing code.



Many getopts libraries only automate the code generation.



Some provide help and documentation - usually perfunctory.   E.g. when you create the option, you may also provide a string saying what the option is for.



---



Today I tried something a bit different: I unified the documentation and the tests for command line arguments. The simple examples were tests, automated of course. More complicated tests were interleaved with the simple examples. Much more complicated ones were outlined.



Rather like a literate programming approach. Later, I googled refs such as those attached below.



I am quite pleased with this approach.  
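Python's doctest is a ready-made instance of this doc-plus-test unification: the examples in a docstring are the documentation and are executed as tests. The function here is a toy for illustration:

```python
def clamp(value, low, high):
    """Clamp value into the closed range [low, high].

    These examples document the behaviour AND run as tests under
    `python -m doctest file.py`:

    >>> clamp(5, 0, 10)
    5
    >>> clamp(-3, 0, 10)
    0
    >>> clamp(99, 0, 10)
    10
    """
    return max(low, min(high, value))

import doctest
print(doctest.testmod().failed)  # 0
```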



---



In an ideal world one might automate all three: getopt codegen, help docs, and tests.  



Although one then worries about common mode errors and single points of failure or untestedness.



But it turns out that it is not that hard to add the getopt code - especially if it is too complicated, with too many validation checks, for regular getopt. It is the creation of the documentation and the tests that is a pain. If the tests are automated, the codegen is straightforward.





---



See related:





Getopt::Euclid - search.cpan.org:



Combining Literate Programming and Functional Unit Tests to Augment the Workflow and Documentation in Software Development Projects - DBIS EPub: "Combining Literate Programming and Functional Unit Tests"