It looks like VirtualBox guestcontrol makes it easier to pass command line arguments and environment variables to an application started on the guest.
Unfortunately, Parallels does not.
More and more I regret having chosen Parallels instead of VirtualBox.
I will switch as soon as possible.
VBoxManage guestcontrol start [--exe] [--putenv] [--unquoted-args]
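For reference, a hypothetical invocation - the VM name, credentials, paths, and values here are placeholders, and the exact option syntax varies between VirtualBox versions:

```
VBoxManage guestcontrol "MyGuestVM" start \
    --exe "/bin/ls" \
    --username guestuser --password guestpass \
    --putenv "MYVAR=myvalue" \
    --unquoted-args \
    -- ls "-l" "/home/guestuser"
```

Arguments after the bare -- are passed to the guest program, the first conventionally serving as argv[0].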
Disclaimer
The content of this blog is my personal opinion only. Although I am an employee - currently of Nvidia, in the past of other companies such as Imagination Technologies, MIPS, Intellectual Ventures, Intel, AMD, Motorola, and Gould - I reveal this only so that the reader may account for any possible bias I may have towards my employer's products. The statements I make here in no way represent my employer's position, nor am I authorized to speak on behalf of my employer. In fact, this posting may not even represent my personal opinion, since occasionally I play devil's advocate.
See http://docs.google.com/View?id=dcxddbtr_23cg5thdfj for photo credits.
Friday, December 23, 2016
Wednesday, November 30, 2016
Your most important passwords are probably your weakest passwords
Have you ever noticed that your most important passwords are your weakest passwords?
Most of my passwords are random 20-, 24-, or 32-character Letter+Number*Symbol=Passwords, stored in a password manager because I can't remember them. Several hundred, different for each site; for that matter, the account names and email addresses are mostly different too. They are automatically entered into websites when I say okay. I am less happy when I have to cut and paste passwords, because clipboards can be a security hole. Anyway, not only do I not need to remember these passwords, I don't have to type them in.
So, consider the passwords left over.
First, (1) the password for the password manager itself. My most important password. Because I have to remember it, and type it on at least 2 different keyboards - phone, laptop - it probably has less entropy than most of the passwords in my password manager.
Worse: it is long enough and hard enough to type that I have more than once hit "show me my password as I type it", when repeated tries fail. So any camera looking over my shoulder may have captured it from the screen. Like a security camera in an airport.
Of course, even without "show me my password", a camera may see your typing.
Change it frequently, but then entry errors rise.
Trouble entering my password usually arises on my iPhone keyboard. Good passwords are easier to type on a full keyboard. Not only are mobile phone keyboards, and in particular Apple's iPhone keyboards, cramped and likely to produce wrong-key errors - but you also have to shift to get numbers and symbols. Sometimes multiple shifts.
Oh, and you probably should have audible keyclicks turned off. Have you noticed that Apple provides a different click for the shift key that changes between lowercase, uppercase, numbers, and symbols? I am sure anyone with a microphone can record that and greatly reduce the password search space. Even without different clicks, inter-key delay provides a lot of attack info.
Recently I have had to do a factory reset and reinstall from scratch on my Apple iPhone 3 times within the same month. (I think/hope the iPhone flash storage has errors - or else the iOS apps are full of bugs that may become security holes.)
Doing this has driven home how many times you have to type in (2) the password for the device itself, and (3) Apple's iCloud password.
Now, device passwords, such as for your phone or tablet or laptop, of necessity need to be typed in a lot. One of the best things about fingerprints is that, ideally, you can use the fingerprint to reduce the number of times you have to type in the long password - and hence make the full password stronger. My password manager does that. But... Apple does not. At least not for 48 hours or next power-cycle.
So, we will give device passwords a pass. Ideally, you have them physically secure, and you aren't rlogin-ing into them. Ideally, there is a different, stronger, password for remote access...
Moving on to (3), the Apple iCloud password, and other cloud passwords. You have to type it in a lot, not only during install, but also when installing apps. Plus Apple, in their infinite wisdom, has made it difficult to use password managers with it. (Note: I don't use Apple Keychain much.)
So, the Apple iCloud password is arguably comparable in importance to your iPhone password, and more vulnerable. More vulnerable, because the iCloud password can be entered by an attacker into Apple's webpages, i.e. it can be entered remotely. Only arguably comparable in importance, because while the iCloud password controls a lot of stuff, your device password probably controls access to some of your 2-factor authentication.
Those are probably my most important passwords:
(1) password manager
(2) device
(3) iCloud.
Interestingly, my Google password is not in this list in category (3), even when I use an Android device. Google seems to be friendlier to password managers than Apple is, probably because of its web.site DNA.
My Microsoft password may be in category (3). I don't use it enough to be sure, although from one situation in a Microsoft store I think they have made some of their services uncooperative with password managers, and hence encouraging of weak passwords.
Finally, (4) my company password. Often entered, often uncooperative with password managers, e.g. in Microsoft Windows login, Exchange, VPN, Perforce. Often enough that I have simply had to memorize it - and it is therefore weaker than I would like. It doesn't piss me off as much, because it's the company's secrets that are at risk, not so much my own. I have encouraged IT to use a better password system, friendlier to password managers - and IT's response has been to require that the password be changed more often. Which makes it harder to remember. Which encourages me to weaken the password.
---
Now, having listed these, I have probably opened myself up to attack. Blargh.
---
Backing up: it is harder to enter a good password on a mobile device touch keyboard than on a physical keyboard.
Key clicks are bad. Different keyclicks for different keys are really bad, even if just "clack" for shift key and "click" for all other keys.
But even without keyclicks, shift keys make it harder to enter good passwords.
Ideally, high entropy passwords would be a random combination of A-Za-z0-9~`!@#$%^&*()_-+={}[]||"'<>,?/:;. (I don't think many systems allow control characters in passwords. Non-English unicode? Notice that your iPhone keyboard has characters that are a pain to type on a USASCII keyboard? How about emoji?)
Uniformly weighted.
(With easy to attack patterns pulled out - e.g. all 0s can be produced by a random password generator. But I still would not use it as a password.)
Roughly speaking, there are circa 4*26 characters available for passwords. If your keyboard defaults to lowercase, there's a 75% chance that a random character will require a shift key. And so on. Roughly speaking, you have to hit 1.75 keys for every random character in your password - and that does not even count the times you have to hit number shift and then symbol shift.
I posit that part of the difficulty of typing a password is the number of keys you have to hit. The raw physical activity. Not just the memorization. So if truly random passwords require 1.75 keys per character, I posit that users may prefer to use passwords that are 4/7 the length of what they might use on a keyboard that required fewer shifts. (Note: I think physical keyboards are less onerous in this regard than thumb keyboards.)
E.g. instead of 28 characters, Apple's crippled keyboard might lead to users creating passwords that are only 16 characters long. 21->12. 14->... no, that's horrible!!!
Do the math: the longer password from the smaller alphabet can be a win. But 1.75x is probably an overestimate for the increased difficulty.
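Doing that math as a rough sketch (assuming circa 94 printable ASCII characters for the full alphabet and 26 for lowercase-only):

```shell
#!/bin/sh
# Rough entropy comparison: a short password over the full printable
# alphabet vs. a longer, shift-free, lowercase-only password.
full=$(awk 'BEGIN { printf "%.0f", 16 * log(94) / log(2) }')
lower=$(awk 'BEGIN { printf "%.0f", 28 * log(26) / log(2) }')
# Expected touches per character, with 4 shift planes of 26 keys each:
touches=$(awk 'BEGIN { printf "%.2f", 1 + 3.0 / 4 }')
echo "16 chars from 94 symbols: ~$full bits"
echo "28 chars from 26 symbols: ~$lower bits"
echo "expected touches per char on a 4-plane keyboard: $touches"
```

By this estimate, 28 lowercase characters carry more entropy than 16 fully mixed characters, for about the same number of touches (28 x 1 versus 16 x 1.75 = 28).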
I posit that, for the passwords you have to enter on Apple iPhone keyboard, you might be wise to reduce the frequency of shifts. Not eliminate them. And not fixed length groups of the same shift. But perhaps pull from a distribution that does not create quite so many shifts. Where the average touches per character of the password is more than 1, but less than 1.75.
And use a haystack.
Perhaps random password generators should take this into account: the typing efficiency of the password. On what is probably the worst keyboard for typing, the Apple iPhone.
===
Of course, the real fix to the problem of passwords is to get rid of most passwords.
In the 1990s I wrote up an invention disclosure for my then employer, Intel, for what I called a "security amulet". I don't think Intel did anything with it.
The basic idea was to have something you wear. Like an amulet around your neck, or a watch. Possibly surgically implanted. Physical security being part of it.
The security amulet would be net.connected. Your amulet's address would be registered as part of your identity. When you try to log in to a website, the website contacts your security amulet. The amulet asks you "Do you want to log in to your bank?" You confirm, or deny.
The amulet could store passwords. Or a time varying code like Google Authenticator. Better yet if it does challenge response, public/private key style.
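A sketch of the challenge/response variant, using openssl as a stand-in for the amulet's key operations - all names and file paths here are illustrative:

```shell
#!/bin/sh
# The amulet holds a private key; the matching public key is registered
# with the website as part of your identity.
openssl genpkey -algorithm RSA -out amulet_key.pem 2>/dev/null
openssl pkey -in amulet_key.pem -pubout -out amulet_pub.pem 2>/dev/null

# The website issues a fresh random challenge...
head -c 32 /dev/urandom > challenge.bin
# ...the amulet signs it after you confirm on the device...
openssl dgst -sha256 -sign amulet_key.pem -out sig.bin challenge.bin
# ...and the website verifies against the registered public key.
result=$(openssl dgst -sha256 -verify amulet_pub.pem -signature sig.bin challenge.bin)
echo "$result"
```

A replay attacker gains nothing from observing the exchange: each challenge is fresh, and the private key never leaves the amulet.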
The thing you are trying to log into could connect over the net. Or you could be disconnected from the net, logging into a device locally without going through the net - e.g. Bluetooth; back in the day, I liked body area networks, e.g. skin conductivity, between amulet and keyboard. Or you could do both, in a triangle: device<--internet-->website<--internet-->amulet<--localnet-->device. Verify not just that somebody in possession of your amulet approves, but that the amulet is also physically close to the device where the action is taking place. (Unless spoofed, of course. Time delay?)
You authenticate to your amulet... howsoever you want. Some amulets might require you to type a password in once a day. Some might use biometrics like fingerprint. Some might monitor your pulse, to detect when you have taken the amulet off. Some might check DNA. Some might do nothing. The point is, once the protocols between device, service, and amulet are established, then innovation can happen between the user and the amulet. Whereas nowadays we are all constrained by what Google, etc, accept. The largely time based authenticator apps are better than passwords. Watch authenticator apps are better still. But still not there.
Back in the 1990s too much infrastructure was needed. There was no standard way to talk to a security amulet. Mobile was still analog. The Bluetooth SIG started in 1998. People thought that I was crazy for wanting public key in a watch-like device.
But all of these pieces are in place nowadays. The missing piece is that Google Authenticator still expects you to type in a code. But we now have push authentication, which scares me because the user interaction is so trivial, and hence insecure. Especially on a phone, which can be easily misplaced, and easily unlocked given fingerprints.
What I want today: push authentication to my watch. And time based on my watch. Etc.
Friday, October 28, 2016
Bash trap DEBUG does not work inside shell functions - not even to print
Bash Reference Manual:
trap
If a sigspec is DEBUG, the command arg is executed before every simple command, for command, case command, select command, every arithmetic for command, and before the first command executes in a shell function. Refer to the description of the extdebug option to the shopt builtin (see The Shopt Builtin) for details of its effect on the DEBUG trap.
'via Blog this'

This bit me today:
It seems that
- not only does the bash DEBUG trap not run for every command inside a shell function - by default it runs only once, at the start of the function
- which is not unreasonable as a default
- especially if shopt extdebug allowed you to "trap DEBUG into" shell functions (which I have so far not been able to make work)
- but "trap ... DEBUG" is actually disabled inside shell functions
- which means that you cannot have a shell function that formats the current trap status nicely
- or which tries to add a "trap ... DEBUG" handler to its caller
Unless I can get shopt extdebug to work as advertised - well, at least, as I think it is advertised - there is a loss of abstraction.
Kluge: pass in the old trap handler setup, compute new, apply outside function.
:-(
A shell session.
Mac: trap
trap -- 'shell_session_update' EXIT
It looks like Apple sets up a trap handler by default.
Mac: trap 'echo DEBUG' DEBUG
DEBUG
Setting up a trivial (but annoying) DEBUG trap handler.
Run once before any command is run.
Mac:
DEBUG
Or even when ENTER is pressed to get a new prompt
Mac: echo xx
DEBUG
xx
DEBUG
Run twice as set up above, once for the command,
once for the exit handler
Now trying to run trap in a function
Mac: trap
DEBUG
trap -- 'shell_session_update' EXIT
trap -- 'echo DEBUG' DEBUG
DEBUG
Mac: foo() {
> trap
> }
DEBUG
Mac: foo
DEBUG
trap -- 'shell_session_update' EXIT
DEBUG
The shell function can execute the trap builtin
but the trap DEBUG handler was disabled
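The kluge above, as a minimal sketch: the function only composes the trap command as a string, and the caller applies it outside the function, where "trap ... DEBUG" actually takes effect. The function and variable names are my own, for illustration.

```shell
#!/bin/sh
# A shell function cannot install a DEBUG trap on behalf of its caller,
# but it can build the trap command as a string for the caller to run.
make_debug_trap() {
    # \$BASH_COMMAND is kept unexpanded; it expands when the trap fires.
    printf "trap 'echo TRACE: \$BASH_COMMAND' DEBUG"
}

cmd="$(make_debug_trap)"    # composed inside the function
# Applied OUTSIDE the function (here, in an explicit bash child shell):
out="$(bash -c "$cmd; echo hello")"
printf '%s\n' "$out"
```

The DEBUG trap fires before echo hello runs, so the TRACE line comes out first, then hello.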
Tuesday, October 25, 2016
Pebble, dynamic app swapping
Pebble | Updated Software for Classic Pebbles: "No more 8 app limit"
I have the original Pebble.
'via Blog this'
Originally, this Pebble classic had an 8-app limit. You could not have more than 8 apps. This was limiting, but I could live with it. Although I have played with many apps, I only really use two apps regularly:
(1) the Misfit step counter and sleep tracker - which is supposed to run in the background
(2) the "smart" Gentle Alarm app - which has made waking up almost pleasant, because it finds a good time in my sleep cycle to wake me up.
These fit comfortably in the original 8-app limit.
The Pebble SW upgrade that removed the 8-app limit at first seemed not to affect these. But recently... :-(
- Repeatedly, including this morning, I switched to my supposedly-always-running Misfit step counter - and I got the progress bar that indicates that the app is not resident on the watch, and is being swapped back in from my phone.
- At least this morning I had my phone with me. Unfortunately, I had already walked quite a distance, so I missed steps
- Other times my phone is not with me: I often do NOT carry my phone on my morning dog walks. So the step counter app hung. (I live in a canyon - no signal, so it is useless as a phone on my walks. I only carry it occasionally as a podcast player)
- One morning I woke up late - the Gentle Alarm app on my Pebble watch had not gone off - and I saw the progress bar that indicates that the app is not resident on the watch, and is trying to swap back in. But I had forgotten to charge my phone, so the watch app was hung trying to swap back in.
This is very unfortunate. The only two apps that I really want on my watch depend on being present all the time. There appears to be no way to guarantee that they will not be swapped out.
I have deleted all unnecessary apps. Factory reset. But I still seem to get this swapping in and out. (I conjecture that Pebble's default apps may swap them out, e.g. for notifications.)
This may be my last straw for the Pebble. I have been considering buying a FitBit Blaze, even though it lacks smart alarms. It is sad that software upgrades kill my usage model.
Rant:
Far too many folks assume that everyone always carries their phone with them. False! Certainly false in the fitness tracker and alarm space.
Apple iPhone and Google Android apps have similarly started to be swapped or paged in from the cloud. Unfortunately, when you live in a place that does not always have connectivity, sometimes the apps do not work, even though they do not require connectivity except to page. Even when connected, the apps are slower.
Thursday, September 29, 2016
Exchange Requires iPhone to Auto-lock
I just wasted 20 minutes trying to prevent my iPad from locking so quickly.
Since I was using my iPad for Quadro, as a UIX user interface extension, with macro keys, it sucked to have to unlock it so frequently.
Since I hardly ever use my iPad for Exchange, I completely removed the Exchange accounts.
I should try to see if I can set up conditional autolock - autolock after 15 minutes when linked in via USB to Quadro on my MacBook, else autolock after 2 minutes, etc.
This is one place where face recognition might be good: don't autolock if my face has been continuously in front of iPad...
(I don't want to login using just my face.)
Exchange Requires iPhone to Auto-lock: "Exchange Requires iPhone to Auto-lock"
'via Blog this'
Monday, September 26, 2016
MODULES FOR XUNIT TESTING IN PERL
Test::Class : "MODULES FOR XUNIT TESTING IN PERL"
'via Blog this'
Not a bad summary:
Test::Unit is a port of JUnit into Perl. Familiar to xUnit users.
Test::Class
Much like xUnit. xUnit inspired. Unfamiliar names muck up porting, but can live with that.
Plays well with traditional Perl test tools like TAP and Test::Builder.
Somewhat object oriented. But the test functions come from Test::More and friends, which are free functions, so somewhat annoying to extend (e.g. to report error location accurately, when you build meta-tests that call multiple asserts internally). But you can access the underlying Test::Builder routines.
Uses the :Test attribute so that introspection can find test functions to run, including setup/teardown.
Pleasant - many folks have had to add such "keep running" behaviour to xUnit.
Test::Unit
Test::Unit is a port of JUnit http://www.junit.org/ into perl. If you have used JUnit then the Test::Unit framework should be very familiar.
It is class based so you can easily reuse your test classes and extend by subclassing. You get a nice flexible framework you can tweak to your heart's content. If you can run Tk you also get a graphical test runner. However, Test::Unit is not based on Test::Builder. You cannot easily move Test::Builder based test functions into Test::Unit based classes. You have to learn another test assertion API.
Test::Unit implements it's own testing framework separate from Test::Harness. You can retrofit *.t scripts as unit tests, and output test results in the format that Test::Harness expects, but things like todo tests and skipping tests are not supported.
But... the Test::Class author does not mention that Test::Unit has been mostly abandoned as of 2016, possibly since 2011 or before.
Test::SimpleUnit
A very simple unit testing framework. If you are looking for a lightweight single module solution this might be for you. The advantage of Test::SimpleUnit is that it is simple! Just one module with a smallish API to learn. Of course this is also the disadvantage.
It's not class based so you cannot create testing classes to reuse and extend. It doesn't use Test::Builder so it's difficult to extend or integrate with other testing modules. If you are already familiar with Test::Builder, Test::More and friends you will have to learn a new test assertion API. It does not support todo tests.
'via Blog this'
Test::Class does not provide its own test functions, but uses those provided by Test::More and friends
Unlike JUnit the test functions supplied by Test::More et al do not throw exceptions on failure. They just report the failure to STDOUT where it is collected by Test::Harness. This means that where you have
sub foo : Test(4) {
    ok($foo->method1);
    ok($foo->method2);
    ok($foo->method3) or die "method3 test failure";
    ok($foo->method4);
}
The second test will run even if the first one fails. But the third will stop the fourth from running.
MODULES FOR XUNIT TESTING IN PERL
Test::Class : "MODULES FOR XUNIT TESTING IN PERL"
'via Blog this'
Not a bad summary:
Test::Unit is a port of JUnit into Perl. Familiar to xUnit users.
Test::Class
Much like xUnit. xUnit inspired. Unfamiliar names muck up porting, but can live with that.
Plays well with traditional Perl test tools like TAP and Test::Builder.
Somewhat object oriented. But the test assertions come from Test::More and friends, which are free functions, so somewhat annoying to extend (e.g. to report error location accurately, when you build meta-tests that call multiple asserts internally). But you can access the underlying Test::Builder routines.
Uses the :Test attribute so that introspection can find test functions to run, including setup/teardown.
Pleasant - many folks have had to add such "keep running" behaviour to xUnit.
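A minimal Test::Class class looks roughly like this (sketch only: the package, fixture, and method names are invented for illustration, and it assumes the CPAN Test::Class module is installed):

```perl
package My::Widget::Test;
use strict;
use warnings;
use base 'Test::Class';
use Test::More;

# Runs before every test method, courtesy of the :Test(setup) attribute.
sub make_fixture : Test(setup) {
    my $self = shift;
    $self->{widget} = { name => 'w1' };    # stand-in for a real object under test
}

# The :Test(1) attribute both marks this as a test and declares its plan.
sub name_is_set : Test(1) {
    my $self = shift;
    is( $self->{widget}{name}, 'w1', 'fixture created by setup' );
}

package main;
My::Widget::Test->runtests;
```

Introspection finds name_is_set via its :Test attribute and runs make_fixture before it, so each test method gets a fresh fixture.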
Test::Unit
Test::Unit is a port of JUnit http://www.junit.org/ into perl. If you have used JUnit then the Test::Unit framework should be very familiar.
It is class based so you can easily reuse your test classes and extend by subclassing. You get a nice flexible framework you can tweak to your heart's content. If you can run Tk you also get a graphical test runner. However, Test::Unit is not based on Test::Builder. You cannot easily move Test::Builder based test functions into Test::Unit based classes. You have to learn another test assertion API.
Test::Unit implements its own testing framework separate from Test::Harness. You can retrofit *.t scripts as unit tests, and output test results in the format that Test::Harness expects, but things like todo tests and skipping tests are not supported.
But... the Test::Class author does not say that Test::Unit has been mostly abandoned as of 2016, possibly since 2011 or before.
Test::SimpleUnit
A very simple unit testing framework. If you are looking for a lightweight single module solution this might be for you. The advantage of Test::SimpleUnit is that it is simple! Just one module with a smallish API to learn. Of course this is also the disadvantage.
It's not class based so you cannot create testing classes to reuse and extend. It doesn't use Test::Builder so it's difficult to extend or integrate with other testing modules. If you are already familiar with Test::Builder, Test::More and friends you will have to learn a new test assertion API. It does not support todo tests.
Test::Class does not provide its own test functions, but uses those provided by Test::More and friends.
Unlike JUnit, the test functions supplied by Test::More et al do not throw exceptions on failure. They just report the failure to STDOUT, where it is collected by Test::Harness. This means that where you have
sub foo : Test(2) { ok($foo->method1); ok($foo->method2); ok($foo->method3) or die "method3 test failure"; ok($foo->method4); }
the second test will run if the first one fails, but the third will stop the fourth from running.
Wednesday, September 14, 2016
Are there any good uses for multiple Perl fat commas in series ( a => b => 1 )? - Stack Overflow
Are there any good uses for multiple Perl fat commas in series ( a => b => 1 )? - Stack Overflow: "Are there any good uses for multiple Perl fat commas in series ( a => b => 1 )?"
'via Blog this'
Making a copy of my own post.
It sucks that you can't cut and paste pseudo-formatted text between sites like StackOverflow and Blogger. Wasn't HTML supposed to solve that? Oh, yeah, scripting attacks. I won't bother to fix the formatting. (I started to by hand, but I must stop wasting time. So much time is wasted fixing the formatting when copying between tools.)
**---+ BRIEF**
In addition to notation for graphs and paths (like Travelling Salesman, or critical path), multiple serial fat arrow/commas can be nice syntactic sugar for functions that you might call like
# Writing: creating $node->{a}->{b}->{c} if it does not already exist
assign_to_path($node=>a=>b=>c=>"value");
# Reading
my $cvalue = follow_path($node=>a=>b=>c=>"default value");
the latter being similar to
my $cvalue = ($node->{a}->{b}->{c})//"default value";
although you can do more stuff in a pointer-chasing / hashref-path-following function than you can with //.
It turned out that I already had such functions in my personal library, but I did not know that you could use `a=>b=>"value"` with them to make them look less ugly where used.
**---+ DETAIL**
I usually try not to answer my own questions on this forum, encouraging others to - but in this case, in addition to the contrived example I posted inside and shortly after the original question, I have since realized what I think is a completely legitimate use for multiple fat arrow/commas in series.
I would not complain if multiple fat arrows in series were disallowed, since they are quite often a real bug, but there are at least two places where they are appropriate.
**(1) Entering Graphs as Chains**
Reminder: my first, totally contrived, use case for multiple fat pointer/commas in series was to make it easier to enter certain graphs by using "chains". E.g. a classic deadlock graph would be, in pairs `{ 1=>2, 2=>1 }`, and as a "chain" `(1=>2=>1)`. If you want to show a graph that is one big cycle with a "chord" or shortcut, it might look like `([1=>2=>3=>4=>5=>6=>1],[3=>6])`.
Note that I used node numbers: if I wanted to use node names, I might have to do `(a=>b=>c=>undef)` to avoid having to quote the last node in a cycle `(a=>b=>"c")`. This is because of the implicit quote on the left hand but not the right hand argument of `=>`. Since you have to put up with undef to support node names anyway, one might just "flatten" `([1=>2=>3=>4=>5=>6=>1],[3=>6])` to `(1=>2=>3=>4=>5=>6=>1=>undef,3=>6=>undef)`. In the former, end of chain is indicated by end of array `[...]`; in the latter, by undef. Using undef puts every node on the left hand side of a `=>`, so the nodes are syntactically uniform.
I admit that this is contrived - it was just the first thing that came to mind.
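A chain-entry helper along these lines is easy to sketch (the function name `add_chain` and the hash-of-hashes adjacency representation are my own illustration, not from any particular library):

```perl
use strict;
use warnings;

# Hypothetical helper: a "chain" like (1=>2=>3=>1) is just the list (1,2,3,1);
# turn each adjacent pair into an edge in an adjacency hash-of-hashes.
sub add_chain {
    my ( $graph, @chain ) = @_;
    pop @chain if @chain and not defined $chain[-1];   # allow the trailing-undef style: (a=>b=>c=>undef)
    for my $i ( 0 .. $#chain - 1 ) {
        $graph->{ $chain[$i] }{ $chain[ $i + 1 ] } = 1;
    }
    return $graph;
}

# The classic deadlock graph, entered as a chain:
my $deadlock = add_chain( {}, 1 => 2 => 1 );

# A six-node cycle with a chord from 3 to 6:
my $cycle_with_chord = add_chain( add_chain( {}, 1 => 2 => 3 => 4 => 5 => 6 => 1 ), 3 => 6 );
```

With numeric nodes the fat commas are pure notation; with bareword node names the trailing undef keeps the last node on the left of a `=>` so it needs no quotes.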
**(2) Paths as a data type**
Slightly less contrived: imagine that you are writing, using, or testing code that is seeking "paths" through a graph - e.g. Hamiltonians, Traveling Salesman, mapping, electronic circuit speed path analysis. For that matter, any critical path analysis, or data flow analysis.
I have worked in 4 of the 6 areas I just listed. Although I have never used Perl fat arrow/commas in such code (usually Perl is too slow for such code when I have been working on such tasks), I can certainly avow that, although it is GOOD ENOUGH to write (a,b,c,d,e) in a computer program, in my own notes I usually draw arrows (a->b->c->d->e). I think that it would be quite pleasant to be able to code it as `(a=>b=>c=>d=>e=>undef)`, even with the ugly undefs. `(a=>b=>c=>d=>e=>undef)` is preferable to `qw(a b c d e)`, if I were trying to make the code resemble my thinking.
"Trying to make the code resemble my thinking" is often what I am doing. I want to use the notations common to the problem area. Sometimes I will use a DSL, sometimes write my own, sometimes just write some string or text parsing routines. But if a language like Perl has a syntax that looks almost familiar, that's less code to write.
By the way, in C++ I often express chains or paths as
Path p = Path()->start("a")->link_to("b")->link_to("c")->end("d");
This is unfortunately verbose, but it is almost self-explanatory.
Of course, such notations are just the programmer API: the actual data structure is usually well hidden, and is seldom the linear linked list that the above implies.
Anyway - if I need to write such "path-manipulating" code in Perl, I may use `(a=>b=>c=>undef)` as a notation -- particularly when passed to a constructor like Path(a=>b=>c=>undef) which creates the actual data structure.
There might even be some slightly more pleasant ways of dealing with the non-quoting of the fat arrow/comma's right hand side: e.g. sometimes I might use a code like 0 or -1 to indicate closed loops (cycles) or paths that are not yet complete: `Path(a=>b=>c=>0)` is a cycle, `Path(a=>b=>c=>-1)` is not. 0 rather looks like a closed loop. It is unfortunate that this would mean that you could not have numeric nodes. Or one might leverage more Perl syntax: `Path(a=>b=>c=>undef)`, `Path(a=>b=>c=>[])`, `Path(a=>b=>c=>{})`.
All we are doing here is using the syntax of the programming language to create notations that resemble the notation of the problem domain.
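A sketch of the sentinel idea (the `Path` constructor and its field names are hypothetical, just to show the parsing):

```perl
use strict;
use warnings;

# Hypothetical Path() constructor using the sentinels discussed above:
# a trailing undef means an open path, a trailing 0 means a closed cycle.
# Every real node then sits on the left of a =>, so none need quoting.
sub Path {
    my @nodes = @_;
    my $cycle = 0;
    if ( not defined $nodes[-1] ) {       # Path(a=>b=>c=>undef): open path
        pop @nodes;
    }
    elsif ( $nodes[-1] eq '0' ) {         # Path(a=>b=>c=>0): closed loop
        pop @nodes;
        $cycle = 1;
    }
    return { nodes => \@nodes, cycle => $cycle };
}

my $open = Path( a => b => c => undef );  # nodes a, b, c; not a cycle
my $loop = Path( a => b => c => 0 );      # nodes a, b, c; a cycle
```

As noted above, using 0 as the cycle sentinel rules out a node actually named 0 - the price of the notation.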
**(3) Finally, a use case that is more "native Perl"-ish.**
Have you ever wanted to access `$node->{a}->{b}->{c}`, when it is not guaranteed that all of the elements of the path exist?
Sometimes one ends up writing code like
When writing:
$node = {} if not defined $node;
$node->{a} = {} if not exists $node->{a};
$node->{a}->{b} = {} if not exists $node->{a}->{b};
$node->{a}->{b}->{c} = 0;
When reading ... well, you can imagine. Before the introduction of the // operator, I would have been too lazy to enter it. With the // operator, such code might look like:
my $value = $node->{a}->{b}->{c}//"default value if the path is incomplete";
Yeah, yeah... one should never expose that much detail of the datastructure. Before writing code like the above, one should refactor to a nice set of object oriented APIs. Etc.
Nevertheless, when you have to deal with somebody else's Perl code, you may run into the above. Especially if that somebody else was an EE in a hurry, not a CS major.
Anyway: I have long had in my personal Perl library functions that encapsulate the above.
Historically, these have looked like:
assign_to_hash_path( $node, "a", "b", "c", 0 )
# sets $node->{a}->{b}->{c} = 0, creating all nodes as necessary
# can follow or create arbitrarily long chains
# the first argument is the base node,
# the last is the value
# any number of intermediate nodes are allowed.
or, more obviously an assignment:
${hash_path_lhs( $node, "a", "b", "c")} = 0
# IIRC this is how I created a left-hand-side
# by returning a ref that I then dereffed.
and for reading (now usually // for simple cases):
my $cvalue = follow_hash_path_undef_if_cannot( $node, "a", "b", "c" );
Since the simple case of reading is now usually //, it is worth mentioning less simple cases, e.g. in a simulator where you are creating (create, zero-fill, or copy-on-read), or possibly tracking stats or modifying state like LRU or history:
my $cvalue = lookup( $bpred_top => path_history => $path_hash => undef );
my $cvalue = lookup( $bpred_top => gshare => hash($pc,$tnt_history) => undef );
Basically, these libraries are the // operator on steroids, with a wider selection of what to do if the full path does not exist (or even if it does exist, e.g. count stats and cache).
They are slightly more pleasant using the quote operators, e.g.
assign_to_hash_path( $node, qw{a b c}, 0);
${hash_path_lhs( $node, qw{a b c})} = 0;
my $cvalue = follow_hash_path_undef_if_cannot( $node, qw{a b c});
But now that it has sunk into my thick head after many years of using perlobj, I think that fat arrow/commas may make these look much more pleasant:
assign_to_hash_path( $node => a => b => c => 0);
my $cvalue = follow_hash_path( $node => a => b => c => undef );
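The two helpers can be sketched in a few lines (same calling convention as described above - base hashref first, value or default last, path keys in between - but the bodies here are my guess, not the actual library):

```perl
use strict;
use warnings;

# Sketch: create intermediate hash levels as needed, then assign.
sub assign_to_hash_path {
    my $node  = shift;
    my $value = pop;
    my $last  = pop;
    for my $key (@_) {
        $node = ( $node->{$key} //= {} );   # autovivify intermediate levels
    }
    $node->{$last} = $value;
    return $value;
}

# Sketch: walk the path, returning the trailing default if it breaks.
sub follow_hash_path {
    my $node    = shift;
    my $default = pop;
    for my $key (@_) {
        return $default unless ref $node eq 'HASH' and exists $node->{$key};
        $node = $node->{$key};
    }
    return $node;
}

my $node = {};
assign_to_hash_path( $node => a => b => c => 42 );
my $found   = follow_hash_path( $node => a => b => c => "default" );
my $missing = follow_hash_path( $node => a => X => c => "default" );
```

Because the fat comma is just a quoting comma, the `$node => a => b => c => 42` call and the historical `$node, "a", "b", "c", 42` call are the same argument list.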
Unfortunately, the LHS function doesn't improve much because of the need to quote the last element of such a path:
${hash_path_lhs( $node=>a=>b=>"c" )} = 0;
${hash_path_lhs( $node=>a=>b=>c=>undef )} = 0;
so I would be tempted to give up on LHS, or use some mandatory final argument, like
${hash_path_lhs( $node=>a=>b=>c=>Create_As_Needed() )} = 0;
${hash_path_lhs( $node=>a=>b=>c=>Die_if_Path_Incomplete() )} = 0;
The LHS code looks ugly, but the other two look pretty good, given the expectation that the final element of such a chain is either the value to be assigned or the default value.
assign_to_hash_path( $node => a => b => c => "value-to-be-assigned");
my $cvalue = follow_hash_path( $node => a => b => c => "default-value" );
Unfortunately, there is no obvious place to put keyword options - the following does not work because you cannot distinguish optional keywords from ordinary arguments, at either beginning or end:
assign_to_hash_path( $node => a => b => c => 0);
assign_to_hash_path( {warn_if_path_incomplete=>1}, $node => a => b => c => 0);
my $cvalue = follow_hash_path( $node => a => b => c => undef );
my $cvalue = follow_hash_path( $node => a => b => c => undef, {die_if_path_incomplete=>1} );
I have occasionally used a Keyword class, abbreviated KW, so that a type inquiry can tell us which argument is the keyword. That is suboptimal - actually, it's not bad, it is just that Perl has no single BKM (best known method) here (yeah, TMTOWTDI):
assign_to_hash_path( $node => a => b => c => 0);
assign_to_hash_path( KW(warn_if_path_incomplete=>1), $node => a => b => c => 0);
my $cvalue = follow_hash_path( $node => a => b => c => undef );
my $cvalue = follow_hash_path( KW(die_if_path_incomplete=>1), $node => a => b => c => undef );
my $value = follow_hash_path( $node => a => b => c => undef, KW(die_if_path_incomplete=>1) );
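A sketch of the KW approach (the class name and option names are from the text above; the implementation is my guess at one reasonable way to do it):

```perl
use strict;
use warnings;

package KW;
# Tiny keyword-argument class: bless the option pairs so a type inquiry
# (ref $_[0] eq 'KW') can tell keywords apart from path arguments.
sub new { my $class = shift; return bless {@_}, $class; }

package main;
sub KW { return KW->new(@_); }

sub follow_hash_path {
    my %opts = ref $_[0] eq 'KW' ? %{ shift @_ } : ();   # keywords, if any, come first
    my $node    = shift;
    my $default = pop;
    for my $key (@_) {
        unless ( ref $node eq 'HASH' and exists $node->{$key} ) {
            die "path incomplete at '$key'\n" if $opts{die_if_path_incomplete};
            return $default;
        }
        $node = $node->{$key};
    }
    return $node;
}

my $node = { a => { b => { c => "end" } } };
my $ok  = follow_hash_path( $node => a => b => c => undef );
my $err = eval {
    follow_hash_path( KW( die_if_path_incomplete => 1 ), $node => a => B => c => undef );
};
# $@ now reports the incomplete path at 'B'
```

The type inquiry makes the keyword block position-independent in principle; here it is simply required to come first.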
**Conclusion: Foo(a=>b=>c=>1) seems strange, but might be useful/nice syntactic sugar**
So: while I do rather wish that `use warnings` had warned me about `foo(a=>a=>1)`, when a keyword was duplicated by accident, I think that multiple fat arrow/commas in series might be useful in making some types of code more readable.
Although I haven't seen any real-world examples of this, usually if I can imagine something, a better and more perspicacious Perl programmer has already written it.
And I am considering reworking some of my legacy libraries to use it. In fact, I may not have to rework - the library that I designed to be called as
assign_to_hash_path( $node, "a", "b", "c", 0 )
may already work if invoked as
assign_to_hash_path( $node => a => b => c => 0 )
**Simple Working Example**
For grins, here is an example of a simple path-following function that does a bit more error reporting than is convenient to do with //:
$ bash 1278 $> cat example-Follow_Hashref_Path.pl
use strict;
use warnings;

sub follow_path {
    my $node = shift;
    if ( ref $node ne 'HASH' ) {
        print "Error: expected \$node to be a ref HASH,"
            . " instead got "
            . ( ref $node eq '' ? "scalar $node" : "ref " . ( ref $node ) )
            . "\n";
        return;
    }
    my $path      = q{node=>};
    my $full_path = $path . join( '=>', @_ );
    foreach my $field (@_) {
        $path .= "->{$field}";
        if ( not exists $node->{$field} ) {
            print "stopped at path element $field"
                . "\n full_path = $full_path"
                . "\n path so far = $path"
                . "\n";
            return;
        }
        $node = $node->{$field};
    }
    return $node;    # success: return the value at the end of the path
}
my $node={a=>{b=>{c=>{}}}};
follow_path($node=>a=>b=>c=>"end");
follow_path($node=>A=>b=>c=>"end");
follow_path($node=>a=>B=>c=>"end");
follow_path($node=>a=>b=>C=>"end");
follow_path({}=>a=>b=>c=>"end");
follow_path(undef=>a=>b=>c=>"end");
follow_path('string-value'=>a=>b=>c=>"end");
follow_path('42'=>a=>b=>c=>"end");
follow_path([]=>a=>b=>c=>"end");
and its output:
$ perl example-Follow_Hashref_Path.pl
stopped at path element end
full_path = node=>a=>b=>c=>end
path so far = node=>->{a}->{b}->{c}->{end}
stopped at path element A
full_path = node=>A=>b=>c=>end
path so far = node=>->{A}
stopped at path element B
full_path = node=>a=>B=>c=>end
path so far = node=>->{a}->{B}
stopped at path element C
full_path = node=>a=>b=>C=>end
path so far = node=>->{a}->{b}->{C}
stopped at path element a
full_path = node=>a=>b=>c=>end
path so far = node=>->{a}
Error: expected $node to be a ref HASH, instead got scalar undef
Error: expected $node to be a ref HASH, instead got scalar string-value
Error: expected $node to be a ref HASH, instead got scalar 42
Error: expected $node to be a ref HASH, instead got ref ARRAY
**Another Example `($node->{a}->{B}->{c}//"premature end")`**
$ bash 1291 $> perl -e 'use warnings;my $node={a=>{b=>{c=>"end"}}}; print "followed path to the ".($node->{a}->{B}->{c}//"premature end")."\n"'
followed path to the premature end
$ bash 1292 $> perl -e 'use warnings;my $node={a=>{b=>{c=>"end"}}}; print "followed path to the ".($node->{a}->{b}->{c}//"premature end")."\n"'
followed path to the end
I admit that I have trouble keeping the binding strength of // in my head.
**Finally**
By the way, if anyone has examples of idioms using `//` and `->` that avoid the need to create library functions, especially for writes, I'd love to hear of them.
It's good to be able to create libraries to make stuff easier or more pleasant.
It is also good not to need to do so - as in `($node->{a}->{B}->{c}//"default")`.
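One write-side idiom worth mentioning (my own suggestion, not from the original post): lvalue hash dereference autovivifies intermediate levels, so `//=` alone can replace the exists-check ladder for the simple assign-if-missing case:

```perl
use strict;
use warnings;

my $node = {};

# Autovivification: using $node->{a}{b}{c} as an lvalue creates {a} and
# {a}{b} on the way; //= then assigns only if the final slot is undef.
$node->{a}{b}{c} //= "default value";

$node->{a}{b}{c} //= "ignored";    # already defined, so no change
```

Beware that even a plain read of `$node->{a}{b}{c}` autovivifies `{a}` and `{a}{b}` as a side effect - a classic Perl gotcha, and another reason path-walking library functions can be worth having.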
Later:
At Stack Overflow, @mp3 pointed out that the fat arrow/comma can be a terminator, e.g. `(a=>b=>c=>)`. It doesn't help much in general when you have multiple chains, or to separate keywords in `follow_path($node=>a=>b=>c=>"default", want_keyword=>1)`, but it looks not-so-bad for `Path(a=>b=>c=>)`.
This inspires an EVIL PERL TRICK: `print Do => You => Like => Barewords =>`
One may not want to be associated with such evil.
I have often thought that the reason that we don't actually use Perl as our interactive shell like bash is that bash defaults to barewords, whereas Perl usually requires quotes.
Methinks that it should be possible to create a single language, with the same keywords and operators, that can be turned "inside out":
One mode where strings require quotes:
var a = "string-value"
a second mode where things are string by default, and it is the keywords and syntax that needs to be quoted (here by {}):
{var} {a} {=} string-value
The latter might be useful in literate programming. Same programming language constructs, just inverted. Although the embedded programming language syntax might be most like Perl interpolation - one might need different quotes for code producing a value within the text, and code operating on the text.
Command line shells are, for the most part, a hybrid: the first word on a line is special, a command; everything else is a string by default.
Sunday, September 11, 2016
IDEA: Enhanced Timeouts to Ramp Down and Ramp Up Treadmill (watchapp)
This is not a complaint! I am the happy owner and user of a TR1200-DT3.
I just want to make a suggestion that you might consider for an enhancement.
CC'ing workwhilewalking.com since they are the reason I bought the LifeSpan treadmill and one of their electric desks - even though you don't allow them to sell over the web.
From the TR1200-DT3 Under Desk Treadmill Manual
Intelli-Guard™ Walk confidently knowing your safety is assured with Intelli-Guard™. Step away from your treadmill for more than twenty seconds and your treadmill’s belt will automatically glide to a smooth stop.
Intelli-Step™ Never miss a stride with Intelli-Step™. Your steps are automatically calculated with meticulous precision, displaying immediate feedback and historical trends via your Club account. (OK, a minor complaint: the LifeSpan Club and apps are pretty useless. But that's okay, wearing a FitBit on my shoe works.)
Problem:
Intelli-Guard stops the treadmill 20 seconds after I step off, if I am walking faster than 1 mph.
Well, up until now this was never a problem - but all of a sudden I have started being able to walk 1mph and faster, while working on my PC. E.g. while writing this email. (And I am regularly getting 30-40K steps a day, counted by FitBit, and feeling great!)
The problem: I often step away for a bit. E.g. when the doorbell rings, or for a biobreak, or to get a cup of tea (typically while a slow build is going on).
When I get back, Intelli-Guard has stopped the treadmill. Whereas in the past, when walking slowly, it would stay running, and I would just hop back on and resume walking while working.
That would not be a problem, except that I often forget to start the treadmill up again, thinking "oh, I will just make a quick change and rebuild".
... eventually I realize that I have been standing without using the treadmill for 2 hours. Knees and back aching, etc. (Walking on treadmill is much less stressful than standing - and better exercise as well.)
I have tried disabling the Intelli-Guard timeout (that's what I am doing now), and also resuming at the last speed. Neither is satisfactory - especially not when I realize that I left the treadmill running overnight.
Suggestion:
1) Preferences for timing out the treadmill. E.g. not 20 seconds, but configurable, up to an hour or so.
2) Two timeouts:
2.1) Timeout #1 ramps the treadmill down from speed in use to some slow default, like 0.4mph.
2.2) Timeout #2 stops treadmill after an hour or so, to avoid wasting power.
These two timeouts should be easy.
For extra credit:
3) Recognize when user is back, and ramp back up.
3.1) e.g. if in the "slow mode", you can recognize that the user is back by noticing the steps.
3.2) extra credit, a sensor - visual, or bluetooth with phone or watch, or ...
Ramping up might be a safety challenge. Best if you can query user, and ask "do you want to ramp back up to speed".
Where to query?
- existing primitive console
- fancier console (you probably have)
- phone app (oh, no, I hate phones, now that I use a ...)
- watch
- could be as simple as a text or other push
- "Do you want to ramp up treadmill?"
- that user can reply to (some watches can, e.g. Apple or my Pebble; apparently not a FitBit yet)
- might be a no-reply notify / text
- "I am about to ramp up. Press stop while ramping up to stop at that speed..."
- or you can write a watchapp
- but that gets you into the losing game of "which watch?" And you thought Android/iPhone was a pain.
- IMHO watches are a great way to control stuff like this
- more personal than a phone
- much harder to set down
- IMHO a text (or better, a secure messaging app, if there is a standard) is the way to go. Gets you watch and phone. and also PC. With or without reply from device sent to.
BTW: although SMS texts are not secure (you would not want a hacker to be able to remote-control a treadmill - that could cause severe injury), you can send authenticated messages between the LifeSpan treadmill and the device.
BlueTooth, of course - probably BT/LE - is more secure.
Once again: I am a happy user of my treadmill desk.
But I am an engineer, and always want to improve. (Worse, a computer architect)
---+ LOW PRIORITY
FYI I have corresponded with you in the past about:
- flaky boot (looks like power sequencing)
- y'all offered to have me send in the console for a fix
- but I never got around to it - too much hassle, and what I have works - don't want to give it up.
- and transferring LifeSpan data to somewhere, anywhere,
- like FitBit
- not useful - I already have a FitBit, sometimes 2
- one on my feet that counts while on treadmill, since hands typing don't count (on shoes I keep by treadmill for use on treadmill, clean, not used outside)
- wristband, that counts when not using treadmill
- FitBit can handle multiple FitBit stepcounters, but not non-FitBit stepcounters
- or Apple Health
- again doesn't understand multiple stepcounters
- or my own spreadsheet
- I can certainly handle / record multiple stepcounters
- I can even reconcile, so long as I have fine grain counts, e.g. per minute
- even if clocks are out of synch (time warp)
Saturday, September 10, 2016
Perforce: "Git ignore syntax is fully supported" - NOT :-(
---+ BRIEF
Posting in the hope of saving somebody else the trouble of figuring out that Perforce does not support full wildcards in P4IGNORE, even though they say "Git ignore syntax is fully supported".
Mildly annoyed.
But also amused: to work around Perforce's limitations, I did
@p4sven heck!: echo {a,b,c,d,....}{a,b,c,d,...} >> .p4ignore (had to shrink to fit tweet). wc .p4ignore now 1418 lines.
Combinatoric explosion is so much fun. Should I go for three characters ???
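A sketch of how such a workaround can be generated with shell brace expansion (lowercase letters only here, so the line count differs from the tweet's 1418):

```shell
# Enumerate every 1- and 2-letter lowercase filename explicitly,
# since P4IGNORE does not honor the ? wildcard.
printf '%s\n' {a..z} {a..z}{a..z} > .p4ignore
wc -l < .p4ignore   # 26 + 26*26 = 702 lines
```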
---+ DETAIL
I hate it when ... somebody, whether the help file, SW developer blog, or ads ... says that "something is fully supported" when it is not.
I was very excited to see that Perforce might support .gitignore-comparable wildcards. I wasted an hour proving that, while P4V 2015.2 does support some .gitignore wildcards, it does not support all, specifically not the ? single character wildcard, or patterns such as ?? for 2-character filenames, etc.
@p4sven I won't bother you any more. Only did because you said "Git ignore syntax is fully supported". Wrong! ? wildcards are not
P4IGNORE: Ignorance is Bliss | Perforce: "Git ignore syntax is fully supported"
It is okay to say "Perforce supports some or most .gitignore syntax, to the point where you can share .gitignore files with projects using git, and still use Perforce if you want to." This is fine - honest, and valuable even if not complete.
Even better if you can say what you do not support. So I don't waste time figuring out what isn't properly supported.
But to say that you have full support when you do not WASTES MY TIME.
Worse, git is Open Source, it probably has test suites for .gitignore that Perforce could have looked at. (Hmm, could that be a license violation? Don't use the code, just extract some test patterns...)
Worse comes to worst, some random file patterns could have been run on both p4 and git.
---
Why `?` and `??`?
I have a habit of creating files with single character filenames when doing emacs text wizardry to create lists of files to work on (ironically, to work around Perforce limitations).
E.g. filenames like "a", "b". Also "a1", "a2".
Aside: Yeah, I have been bitten by "cc" and other two letter executables, back when I was doing OS releases for Gould and Motorola:
$ bash 1405 $> ls /bin/??
/bin/cp* /bin/dd* /bin/df* /bin/ed* /bin/ln*
/bin/ls* /bin/mv* /bin/ps* /bin/rm* /bin/sh*
Why not "tmp*"? Well, I use that too - but ? and ?? are shorter and easier to type.
---+ SEE ALSO
See also http://stackoverflow.com/questions/18240084/how-does-perforce-ignore-file-syntax-differ-from-gitignore-syntax
Thursday, September 08, 2016
Fat commas / fat arrows in series
I just got bitten by a bug caused by two fat commas / fat arrows in series:
$ bash $> perl -e 'use strict; use warnings; my @v = ( a=> b => 1 )'
✓
This was actually in a function - actually in a constructor for an object (a blessed hash), so I was thinking {} when it was new( a=>b=>1 ).
Obviously I found the bug fairly quickly - but I would prefer to have had a compile-time error or warning rather than a run-time error.
Q: are there any good uses for fat commas in series?
I am surprised that there was not a 'use warnings' warning for this.
---
I like functions with keyword arguments. In Perl there are two main ways to do this:
func_hash_as_array_arg( kwarg1=>kwval1, kwarg2=>kwval2 )
func_hashref_as_scalar_arg( { kwarg1=>kwval1, kwarg2=>kwval2 } )
which can be mixed with positional in a reasonably nice way
func( posarg1, posarg2, kwarg1=>kwval1, kwarg2=>kwval2 )
func( posarg1, posarg2, { kwarg1=>kwval1, kwarg2=>kwval2 } )
and also in less nice ways
func( { kwarg1=>kwval1, kwarg2=>kwval2 }, varargs1, varargs2, ... )
Although I prefer f(k1=>v1) to f({k1=>v1}) - less clutter - the fact that the hashref "keyword argument group" gives a slight bit more compile-time check is interesting. I may flip.
Of course, the real problem is that Perl needs a proper syntax for keyword arguments.
Perl6 does it better.
---
For grins, some related code examples with 2 fat commas in series.
$ bash $> perl -e 'use strict; use warnings; my %v = ( a=> b => 1 )'
Odd number of elements in hash assignment at -e line 1.
✓
$ bash $> perl -e 'use strict; use warnings; my $e = { a=> b => 1 }'
Odd number of elements in anonymous hash at -e line 1.
✓
$ bash $> perl -e 'use strict; use warnings; my $e = [ a=> b => 1 ]'
✓
$ bash $> perl -e '
use strict; use warnings;
sub kwargs_func{ print "inside\n"; my %kw = $_[0] ;};
kwargs_func( a=> b => 1 )
'
inside
Odd number of elements in hash assignment at -e line ##.
✓
$ bash $> perl -e '
use strict; use warnings;
sub kwargs_func{ print "inside\n"; my %kw = %{$_[0]} ;};
kwargs_func( {a=> b => 1} )
'
Odd number of elements in anonymous hash at -e line ##.
inside
✓
---
Not the same problem, but along the same lines: When a fat comma is confusing | Samuel Kaufman [blogs.perl.org]
Friday, August 19, 2016
Beyond Getopt: Doc + Test > CodeGen
For decades I have been automating generating code for boilerplate like command line parsing (getopt) and generating documentation for that code. E.g. circa 1985 in my first job I wrote tools to generate manpages from what looks very much like Perl POD (Plain Old Documentation).
I admit to joining the "test-first" and "test-driven" bandwagon later. Grudgingly at first - Bob Colwell at Intel/P6 said "if it doesn't have an automated test, it doesn't exist". With growing enthusiasm as I discovered Agile and refactoring circa 1996.
Yet for all of my love of automation (i.e. my laziness), I was never quite happy with getopt style functions, whether in C, C++, Perl, etc. Often the standard libraries did not parse options the way I want:
- Typically I want options to be parsed strictly left to right, so that later options can override earlier options, so that you can create configurations and then customize them. Except, e.g., when I want to disallow overriding.
- Often I want to have expression parsing as part of the options, so that you can say CPUSIM -cachesize=linesize*sets*ways -sets=4 ...
- I want to be able to have the same or related options parsers for command line, environment variables, and config files.
- I want modular, composable options parsing - at Intel we called this "knobs" parsing, to the merriment of pommies - so that I can plug a cache model into the L2 position, and instantly get all of the appropriate options for that new L2 cache.
- And so on.
Plus, of course, I always want to automate the creation of docs and help messages, so that they are guaranteed to be kept in sync with the parsing code.
Many getopts libraries only automate the code generation.
Some provide help and documentation - usually perfunctory. E.g. when you create the option, you may also provide a string saying what the option is for.
---
Today I tried something a bit different: I unified the documentation and the tests for command line arguments. The simple examples were tests, automated of course. More complicated tests were interleaved with the simple examples. Much more complicated ones were outlined.
Rather like a literate programming approach. Later, I googled refs such as that attached below.
I am quite pleased with this approach.
---
In an ideal world one might automate all three: getopt codegen, help docs, and tests.
Although one then worries about common mode errors and single points of failure or untestedness.
But it turns out that it is not that hard adding the getopt code - especially when it is too complicated, with too many validation checks, for regular getopt. It is the creation of the documentation and the tests that is a pain. If the tests are automated, the codegen is straightforward.
---
See related:
Getopt::Euclid - search.cpan.org:
Combining Literate Programming and Functional Unit Tests to Augment the Workflow and Documentation in Software Development Projects - DBIS EPub: "Combining Literate Programming and Functional Unit Tests"
Sunday, August 14, 2016
How to Start Wearing EyeGlasses in your 50s
---+ Summary
If you have had good vision all of your life and are only now starting to use glasses:
* Take advice, whether from people who wear glasses, or from vision care professionals (ophthalmologist, optometrist, optician), with a big grain of salt. Most people in developed countries now wear glasses, so they are not used to the issues people with formerly perfect vision may have on starting to wear glasses.
* Do NOT purchase progressive lenses. They are very disappointing for somebody who is used to being able to move their eyes to focus anywhere without moving their head.
* Purchase clear single vision driving glasses, if needed. Get clip-on/flip-up sunglass filters for use when driving into the sun; remove them and use the clear glasses at night. Or possibly get "fitover" sunglasses that go over your eyeglasses.
As of this date (Aug 2016) there is no satisfactory solution for a single pair of photochromic tint-changing eyeglasses for driving, usable both day and night. If you need driving glasses both day and night, and you don't want clip-ons, you will need to buy two pairs: one for day, one for night.
* Purchase single vision reading glasses, if that's what you need.
I have not tried photochromic lenses. They sound good - maybe next time.
I have not tried bifocals or trifocals, burned as I was by my bad experience with progressives; they may be the 5th or 6th pair I try.
I am optimistic about variable focus lenses, but the market seems sparse and the range of vision restricted.
I like the largest possible lenses for the largest possible range of view. Extra large aviators or rectangular frames/lenses seem to be the largest available - 64mmx56mm.
People who have worn glasses for years often have several pairs, purchased as their prescription changed. New eyeglass wearers may not have this luxury. I recommend purchasing single vision of whatever prescription you need most (near/far/middle). Clear - certainly if for driving, probably for indoor reading.
Finally: I think vision is a field begging for new technology, like variable focus lenses. Although I do not know how much medical regulation constrains innovation. You need to come from not wearing eyeglasses to see how primitive eyeglasses are.
My only complaint about these "driving glasses" is that they are sunglasses, and hence not useful at night. I don't regret getting all of the fancy sunglass features - polarized, glare reduction, etc.
I do regret not getting some form of tint adjustable photochromic glasses, like the Transitions lenses that used to be advertised so widely on TV (still may be - I watch very little TV nowadays). Or, at least I would regret not getting photochromic lenses - except that there is no perfect choice of photochromic lens for driving.
Back in 2014, most opticians warned that photochromic lens would not change tint inside a car, because the car windows filtered the UV that drives the transition. Rather defeats the purpose for driving lenses. My own research discovered some photochromic lenses that would work inside a car, whether extra-sensitive to UV, or dependent on less filtered wavelengths. But I played it safe.
Now, in 2016, photochromic driving glasses good for night and day have ALMOST arrived. E.g. see Transitions' comparison chart. Their DriveWear brand "fully activates" "behind-the-windshield", but they forbid you to use DriveWear glasses at night, since they never turn completely clear. This would be worrying if you want to use such driving glasses mainly around dawn and twilight, in mountainous country with big, dark shadows. The original Transitions Signature line and the Vantage line do not activate behind-the-windshield. The Transitions XTRActive line attains "moderate activation" behind the windshield, but lacks the polarization of the Vantage or DriveWear lines. All of the Signature, Vantage, and XTRActive lenses are recommended for driving at night (note that this was not apparent on the comparison chart - I had to search other marketing material). But only DriveWear is recommended for prescription sunglasses.
Friends (salt!) recommend photochromic far seeing outdoor glasses for use both inside and outside the car, with clip-ons or wearovers for use while driving.
You can also get photochromic "overglasses" - would not help in the car, but might help while on the water or snow.
I have not yet taken these friends' advice, mainly because I have already spent my budget for the next couple of years, but also because (1) the photochromics that are most recommended for driving, e.g. the Transitions Signatures that turn fully clear, do not seem to be very good sunglasses; (2) the best Transitions sunglasses (the Vantage with variable polarization, or the XTRActive, which are darkest) never turn completely clear; and (3)
Photochromic lenses ... have been shown significantly to reduce light transmission compared to uncoated lenses, which may reduce the likelihood of identifying navigation lights. Seafarers should be advised not to wear glasses with photochromic lenses or glasses that are permanently tinted when undertaking lookout duties at night. (From the Norwegian Center for Maritime Medicine.)
Bottom line: my hope was vain, of being able to have a single pair of photochromic driving glasses, usable from night through sunrise/sunset to bright daylight near glaring ocean.
Friends (salt! salt!) recommend wear-over sunglasses - large sunglasses that can fit on top of your eyeglasses.
Since I am used to being able to move my eyes around, it helps to have the largest possible lenses. DEFINITELY UNCOOL.
Most of the frames my optician has available seem to be motivated by vanity: stylish glasses, with a restricted field of view.
Extra-large aviator lenses were the largest lenses I could find (more specifically, extra large aviator frames, lenses to match). I am mostly happy with these, except that they leave a big uncorrected gap at the bottom - exactly where I cast my eyes down to look at my keyboard, or to read a newspaper broadsheet, or to see what I am about to trip on in the woods. I only learned, when I was searching for clip-on sunglasses, that "Extra Large Rectangle" glasses have bigger lenses than "Extra Large Aviators", 64x56 vs 63x51. Next time...
---+ Details
In my youth I had extremely good eyesight. "Fighter pilot eyes", I was told when I was applying to Canada's College Militaire Royal (I chickened out because I was warned that my allergies might result in me not being accepted to pilot training, and because the IIRC 10+ year commitment - 4-5 years at CMR, 5 years after graduation, + 2 years extra as a pilot - seemed like a very long time way back then).
However, almost 4 decades of computer use have degraded my vision. I got my first prescription eyeglasses 2 years ago, and recently got two more pairs. My vision is holding steady - I wanted new glasses because I had bought the wrong types of glasses back then - and because my insurance plan would only buy me new frames once every 2 years. (I remain cheap.)
The purpose of this blog entry is to record what I learned about eyeglasses, and hopefully help somebody else avoid making the mistakes I made with my first eyeglass purchases.
---++ Take Advice - Carefully
To start: take the advice of people who wear glasses with a very large grain of salt. Something like 75% of Americans wear glasses or contacts, many of them since childhood. Most of them do not remember what it is like to have perfect uncorrected vision - and certainly not "more than perfect" vision. (IIRC I had better than 20/20 vision when I was young - 20:10 and 40:20.) They probably do not realize how much of a change it is to have to start wearing glasses.
This includes taking the advice of vision professionals - opticians, optometrists, and ophthalmologists - under advisement as well. Not only do they likely not realize how much of a change it is, but their advice may also be influenced by what sorts of glasses sell best.
---++ Progressives - Just Say No (to Vanity)
My big, expensive, mistake was thinking that "eyeglasses are an old, established technology", and figuring that I would get the best glasses that my insurance company would pay for. This seemed to be "progressive" lenses: true "multifocal" lenses that provide a seamless progression of many lens powers for all viewing distances.
I really need reading glasses for close in, but I had noticed occasional, rare blurry vision, usually when tired, both using my computer (display about 2.5 feet away from my eyes when using my treadmill desk, a bit closer when sitting), and at night or during bright sunlight, e.g. sunrise or sunset. My vision test revealed a slight astigmatism that, when corrected, helped my distance vision.
(Aside: in an ideal world you would take several vision tests - so called refractions - at different times of the day, at different stages of fatigue. Also, Zeiss recommends that "Diabetics are recommended to have their eyes tested at different times of the day and to consult an ophthalmologist if necessary", and also notes that Zeiss' "objective refraction tests" can determine whether your night vision needs different correction than your day vision. However, your ophthalmologist-optometrist-optician probably doesn't have the necessary equipment for an objective vision test, and can only give you a subjective vision test. Also, your insurance will probably not pay for multiple tests - mine does not.)
So, "progressive lenses sound good - one pair of glasses that could satisfy all of my needs?"
BZZTTTTT!! WRONG!!!! At least for me.
What I had not realized - what I probably should have realized - is that with progressives only a very small region is in focus. When reading a book, often only one or two lines of text were in focus. Sometimes less than a full line of text, for large fonts: the bottom of an S might be in focus, but not the top. Furthermore, I often found that a line of text in a book might be in focus in my left eye, and in my right, but not right in the center. What I had not realized - what I probably should have realized - is that people who use progressives are in the habit of moving their head left, right, up and down to focus on things. Whereas I, with my formerly good vision, was (and still am) used to moving my eyes and focussing near or far without needing to move my head.
I was able to drive with progressives, but I had to tilt my head or adjust my glasses, and I often needed to tilt my head up or down to read road signs. It seems that people using progressive lenses are looking at the world through a narrow crack - which is actually quite worrying when you realize that you are sharing the road with them.
Overall, it seems that much of the marketing for progressives is vanity, hiding the fact that you are aging. If I had known this I would probably have tried bifocals or trifocals - I don't care about lines in my glasses - but I have not tried them yet. Cheapskate. Plus I like the largest possible range of vision.
---++ Adjustable Focus Eyeglasses - in My Dreams
Aside: true variable or adjustable focus eyeglasses are available, from the likes of Adlens, Superfocus, Empowered, Adspecs and Eyejusters. Of these, only Adlens seems to be widely available. Superfocus and Empowered seem to have gone out of business. Adspecs and Eyejusters seem to be mainly motivated by charity, making eyeglasses available to poor countries, with "Buy One Donate One" offers. I tried several of these after I realized how useless the progressive lenses were - several are much cheaper than regular glasses, especially without astigmatism correction. Overall, they work, but are not great (for this pampered westerner): small field of vision. Plus, one of them is only adjustable once - you break off a plastic tab, after which you cannot adjust it again. A hacker acquaintance, "Ches" (Bill Cheswick), liked his Superfocus glasses enough to buy a second pair, but was not wearing them when I last saw him at a workshop. I might have purchased them myself - I am often a gadget-happy too-early-adopter - but they were no longer for sale. At the moment, Adlens appears to be the main player in this market. I tried their Alvarez lens - small field of view. Inventors and startup companies with new technology pop up regularly in the news, but peter out.
---++ Driving Glasses, Sunglasses, PhotoChromic => Uncool Dad
When I purchased the unsatisfactory progressives I also purchased single-vision sunglasses, for driving into the sun and spotting while on the water. These have been satisfactory. I don't need them much, but they help when necessary. I have been needing them more and more - ah, the joys of age! My only complaint about these "driving glasses" is that they are sunglasses, and hence not useful at night. I don't regret getting all of the fancy sunglass features - polarization, glare reduction, etc.
I do regret not getting some form of tint adjustable photochromic glasses, like the Transitions lenses that used to be advertised so widely on TV (still may be - I watch very little TV nowadays). Or, at least I would regret not getting photochromic lenses - except that there is no perfect choice of photochromic lens for driving.
Back in 2014, most opticians warned that photochromic lenses would not change tint inside a car, because the car windows filter out the UV that drives the transition - which rather defeats the purpose for driving lenses. My own research discovered some photochromic lenses that would work inside a car, either extra-sensitive to UV or dependent on less-filtered wavelengths. But I played it safe.
Now, in 2016, photochromic driving glasses good for night and day have ALMOST arrived - e.g., see Transitions' comparison chart. Their DriveWear brand "fully activates" "behind-the-windshield", but they forbid you to use DriveWear glasses at night, since the lenses never turn completely clear. This is worrying if you want driving glasses mainly around dawn and twilight, in mountainous country with big, dark shadows. The original Transitions Signature line and the Vantage line do not activate behind the windshield. The Transitions XTRActive line attains "moderate activation" behind the windshield, but lacks the polarization of the Vantage or DriveWear lines. All of the Signature, Vantage, and XTRActive lenses are recommended for night driving (this was not apparent on the comparison chart - I had to search other marketing material). But only DriveWear is recommended for prescription sunglasses.
Friends (salt!) recommend photochromic distance-vision outdoor glasses for use both inside and outside the car, with clip-ons or wearovers for use while driving.
You can also get photochromic "overglasses" - would not help in the car, but might help while on the water or snow.
I have not yet taken these friends' advice, mainly because I have already spent my eyeglasses budget for the next couple of years, but also because: (1) the photochromics most recommended for driving, e.g. the Transitions Signatures that turn fully clear, do not seem to be very good sunglasses; (2) the best Transitions sunglasses - the Vantage, with variable polarization, or the XTRActive, which are darkest - never turn completely clear; and (3)
Photochromic lenses ... have been shown significantly to reduce light transmission compared to uncoated lenses, which may reduce the likelihood of identifying navigation lights. Seafarers should be advised not to wear glasses with photochromic lenses or glasses that are permanently tinted when undertaking lookout duties at night. (From the Norwegian Centre for Maritime Medicine.)
Bottom line: my hope of having a single pair of photochromic driving glasses, usable from night through sunrise/sunset to bright daylight near a glaring ocean, was in vain.
---++ Low Tech
So now I have the old prescription sunglasses I bought two years ago, and a new pair of untinted driving glasses in case I need them at night. Suboptimal: expensive, and a hassle. I have also purchased "clip-on flip-up" sunglass lenses that may let me carry around only the single pair of driving glasses. My teenage daughter already thinks I am totally uncool. Friends (salt! salt!) recommend wear-over sunglasses - large sunglasses that fit on top of your eyeglasses.
---++ Lens Size - the larger, the better to see with!
Which brings me to my next-to-last point: lens size. Since I am used to being able to move my eyes around, it helps to have the largest possible lenses. DEFINITELY UNCOOL.
Most of the frames my optician has available seem to be motivated by vanity: stylish glasses, with a restricted field of view.
Extra-large aviator lenses were the largest I could find (more specifically, extra-large aviator frames, with lenses to match). I am mostly happy with these, except that they leave a big uncorrected gap at the bottom - exactly where I cast my eyes down to look at my keyboard, read a newspaper broadsheet, or see what I am about to trip over in the woods. Only when searching for clip-on sunglasses did I learn that "Extra Large Rectangle" glasses have bigger lenses than "Extra Large Aviators": 64x56 mm vs. 63x51 mm. Next time...
---++ Purchase Strategy
Don't do what I did: don't jump to expensive glasses right away, especially not progressives (which I chose mainly in the hope of getting away with tracking only a single pair).
Friends (salt! salt! salt!) recommend purchasing cheap clear glasses, typically on the web; $20-40 seems to be the ballpark.
You may not be able to apply your vision insurance benefits - and depending on your insurance, unused benefit dollars may simply be lost.
The cheaper glasses may not be available with the largest lenses :-(
The cheaper glasses may not be scratch- and break-resistant.
But you can experiment, and figure out what works.