Disclaimer

The content of this blog is my personal opinion only. Although I am an employee - currently of Nvidia, in the past of other companies such as Imagination Technologies, MIPS, Intellectual Ventures, Intel, AMD, Motorola, and Gould - I reveal this only so that the reader may account for any possible bias I may have towards my employer's products. The statements I make here in no way represent my employer's position, nor am I authorized to speak on behalf of my employer. In fact, this posting may not even represent my personal opinion, since occasionally I play devil's advocate.

See http://docs.google.com/View?id=dcxddbtr_23cg5thdfj for photo credits.

Thursday, May 11, 2017

Review of my Surface Book with Performance Base

Overall I like my SurfBook, my Surface Book with Performance Base. But ...


I purchased it in late January, but delivery was delayed until late February.  I did not really start using it until late March, and only made it my main machine in April, replacing my old MacBook Pro Retina 15" mid-2014. The reason for the delay in starting to use it: projects at work - the delay in shipment missed a window of opportunity.

I suspect that delivery was delayed because I wanted the 1TB SSD.  To be honest, I actually wanted to purchase an ordinary, non-book Surface with a 1TB SSD, but that does not seem to be available. :-(


Overall

Overall, I am happy - I rate it 4.5 out of 5.

Happier than I was with my MacBook

I love the convertible touch tablet.

I hate the hinge. The hinge scares me.

I hate the fact that the pen keeps falling off.


Pre-Purchase Rationale

Why 1TB?
  • This is my main machine. I am not a big streaming video user or anything like that, but I do play around with OSes:
  • I am currently using 461GB: Windows, Cygwin, and I have barely started (not fully installed) Ubuntu / Windows Subsystem for Linux
  • On my MacBook I had 756GB in use. Much of that was the Parallels VM to run Windows apps like FrameMaker.  I was able to reduce that dramatically when I migrated.
  • Nevertheless, buying a new laptop with a fraction of the disk space seems retrograde, a time-waster
  • I hate disk wars
Why SSD?
  • Do I really need to explain?
  • I did consider non-SurfBook convertibles, some with 1TB rotating disk.  I could not find a reasonable hybrid configuration with a "large enough" SSD cache.
Why switch away from MacBook?
  • No touchscreen on the MacBook. I love touch, I love tablet, I love pen. I considered the MacBook with TouchBar, but it is not big enough, and apparently not easily customizable
  • Work makes me use Windows, for FrameMaker in particular.  Using the Parallels VM was always a hassle.
  • Similarly, Microsoft Outlook runs better on Windows, even though available on Mac and iPhone. Features such as conversation mode are only fully supported on Windows.  
  • I will miss MacOS being a real Unix-family OS.  Historically I have used Cygwin on Windows, but within the first few days it was obvious how much slower Cygwin was for things like starting shells than MacOS. Hence my interest in Ubuntu / Windows Subsystem for Linux, although its unsupportedness is a worry.
Why not Linux? ChromeBook? etc 
  • I want touchscreen/tablet. I like pen.
  • Windows definitely seems to be the leader in convertible laptop / touch tablet.  Especially Microsoft Surface and Surface Book, but also other Wintel manufacturers like Dell and HP.
  • ChromeBooks are not available as convertible tablets, AFAIK.  Who wants a touchscreen clamshell that cannot act as a tablet?
  • Uncertain how good Linux support for touchscreen convertibles is.  I will probably try when this SurfBook nears EOL.
Why not an iPad?
  • I frequently use my machine without network connectivity. It must be freestanding.
Why tablet / convertible?
  • I really, Really, REALLY want a portable computer that I can use for real work on an airplane in economy.  (Not a problem for rich people who can fly non-sardine classes.)
  • The problem is the touchpad, which adds 3-4 inches of mostly unnecessary depth.
  • Certainly not my 15" MacBook.  Even a 13" clamshell is not so good.
  • Whereas with a tablet I can fold away the keyboard, and get stuff done.  Screen keyboard not as nice as real keyboard, but can do a lot just by touch and pen.
  • I also sometimes use a tiny separate keyboard on plane, with Surface or SurfBook in tablet mode.
  • BTW, the non-book Surface is, surprisingly, not so good on a plane. The kickstand and the touchpad on the cover take up too much depth. I have tried touchpadless keyboards with a slot to hold the Surface...
Why touch?
  • I have long used "GUI Extenders" to increase my (in)efficiency with apps like Outlook email. E.g. keyboard and mouse shortcuts, and systems of menus and buttons, for commonly used commands. Sometimes joystick and game controller shortcuts.
  • On my MacBook I used apps such as Quadro, which allow an external iPad to provide touchscreen buttons for MacOS.  I also wrote my own (Python + AppleScript), using Duet Display to give me a touchscreen for my MacBook.   It was a pain to have to deal with the external iPad - more to carry. Plus, although I liked Quadro, it was obviously consumer-grade software, not power-user friendly.  No version control. No diff. Etc.
  • I have written AutoHotKey gUIx SEBP (Graphical User Interface eXtender, Self-Editing Button Pad).  It does most of what I used Quadro for, plus it is real software that I can manage.  And the touchscreen is always present.
  • Right now, I am working while walking on my treadmill desk, with my SurfBook in outward facing mode (clamshell, screen reversed) on a tray above my keyboard, with 3 external monitors.
  • I also use my ahk-guix-sebp on the Surfbook by itself. I split the screen, with Outlook occupying 80% of the width to the left, and my touch buttonpad on the right few inches.
External Monitors:
  • Did I mention that my MacBook could only handle two external monitors?   My SurfBook can handle 3 external monitors, just as well as the ThinkPad touch that I used before the MacBook. (30" 2560x1600 mini-DP, with two 1200x1920 24" monitors on either side using USB display adapters; + the SurfBook LCD, 3000x2000, between keyboard and 30")
  • This was a surprise - I would have expected Apple to be better at multi-monitor support.

And now, the rest of the review story

Most of the above were my pre-purchase rationale, with some minor feedback on usage.

Review after almost a month of use.

+ SurfBook is good on plane
   + First time I have been able to empty my Outlook Inbox on a plane in years

+ Touchscreen button pads work nicely
   + Using AHK (AutoHotKey)

- I still hate the Surface Book hinge, which does not close completely
  - I am constantly worried that it will get crushed in backpack.

+ I like the fact that the SurfBook hinge works without kickstand
   + I constantly use it sitting in a chair with the clamshell on my knees, in situations where the kickstand Surface was inconvenient
   + A few days later: spent much of the day sysadminning my daughter's non-book Surface Pro 3. Drove home how nice it is to be able to adjust the screen angle on the SurfBook.

- Cannot detach display from power/base when power is low or off

   I have come to hate and fear the following error message, when I try to detach the SurfBook screen to reverse it:
Tablet Battery is Low
Please charge the battery now and try detaching later.
(which I would clip and insert as an image, except Google Blogger won't let me - I think because of the system font size)
This usually happens when I have been using the SurfBook as a clamshell laptop at the kitchen table, and then want to plug in at the treadmill - since I use it in reverse clamshell at the treadmill.

Or it happens when I have been using it as a tablet, folded over on top of the keyboard, and I want to switch back to being able to use it as a laptop.

Or it happens when I have run out of battery, and want to plug it in to use it.

Moral: I often want to flip display when power is low, but cannot detach when power is low.

I almost never use the display detached without the keyboard attached. 

But Microsoft seems to assume that the only reason to detach is to use it keyboardless, not just to reverse the display.

- When using in tablet mode, I tend to prefer landscape to portrait. And I prefer to use it with the keyboard attached, with the hinge away from my body (because of its thickness) and the display edge opposite the hinge resting against my body. But the SurfBook's two edge buttons live on the edge that presses into my body - power, and whatever I have bound volume up/down to. You can imagine the problems - powering off by accident, etc.
+ Not a problem if keyboard / power base detached :-).
- But then, I have to carry my backpack around. :-( 
- When the "power base" / keyboard is detached, I worry that the exposed male connector "prongs", which stick out at a funny angle, are likely to catch on something and break (and then MS will blame me, rather than their industrial designer)
+? Overall, I think this detachable hinge might work well enough for a machine left mostly at a desk
- But it doesn't work for me, since I am constantly carrying it between home and work.

- As everyone knows, Windows 10 "tablet mode" GUI is suboptimal. Very poor use of pixels
   - also, not available when connected to multiple displays
   - oddly enough, I think that I most want tablet mode when connected to multiple displays
   + I have bought a 3rd-party tiling window manager...
- But the Windows "desktop mode" is really bad for touch
   - buttons are far too small, far too likely to mishit
+ My ahk-guix-sebp buttonpad helps me use the touchscreen in desktop mode
- but it would be nicer not to have to write code for everything I want to touch in desktop mode
+ At least I *can* use touch mode in clamshell / laptop physical configuration

+ I like the SurfBook magnetic pen
- I dislike that pen is constantly falling off.  E.g. when in my backpack.
- I usually find it after a few days, but seldom have it when I want it.  E.g. on an airplane.
- The stick-on loops that came with the pen for my daughter's non-book Surface also failed after a few months.
- I wish that it had a physical dock or slot for the pen.  Possibly in the space left because the frigging hinge doesn't close?

+ I use clamshell laptop mode a lot (display facing keyboard)
+ I use reverse clamshell mode a lot (display facing away from keyboard, angled circa 75 degrees)
+ I use tablet clamshell mode moderately (display facing away from keyboard, closed to hide keyboard)
   - see note about power button getting hit by accident
- I almost never use "detached" screen from powerbase/keyboard

- I dislike Windows Hello face recognition login.  I do not trust it. In fact, I disable the cameras in the BIOS and tape them over.
- I prefer fingerprint, like on the old Surface Pro 4 cover, or on my iPhone and newer Macs.
- But all biometrics are problematic, security risks. Fingerprints can be lifted. Face recognition can be faked out by photos or masks.
- I want fingerprint as an easy way of keeping myself logged on - e.g. set a short timeout for locking the screen, which can be unlocked with a fingerprint.  But I want to be required to enter a longer password at least once a day, or every few hours.
- Face recognition could be used same way, even less intrusively than fingerprint. But the face recognition camera can be used to spy on you.  Not just a privacy risk - it can probably see enough muscle movement to infer your password.

- Similarly, microphones are also a security risk. They can also infer keystrokes, e.g. passwords.
- I started off disabling both cameras and microphones. I disable Cortana voice recognition.
- But I have grown to like some MacOS "say" commands, integrated in my shell. Mainly, to alert me when a long running shell command has finished.
- Unfortunately, the SurfBook BIOS cannot enable speaker output while disabling microphone input. (I know, any speaker can also be used as a microphone. But I would still like separate disables. And HW that did not allow input from the speaker.)

Irony: I have one of the earliest patents on the webcam, but I would prefer my computer not to have one. I'll go further: laptops and tablets should not have cameras and microphones: leave that to the phone. Phones should have Faraday cage cases that block sound, light, and EM.







Yes, really, I did (predict IME security bugs)

I usually try to resist the temptation to say "I told them so".  Saying "I told you so" doesn't make you popular with the people you told.  It is better to move forward and get things done, with the cooperation of those people, than it is to be proven right but shunned.

But in the case of the Intel IME / IAMT / ARC embedded CPU inside the master CPU/chipset, I am going to give in.  Because, yes, really, I did predict such security bugs.

Darn...  I wrote a moderately long diatribe.  But lost it, because "Blogger is not a content management system, and does not provide versioning."  (I could swear Blogger used to version - perhaps I am misremembering Google Docs.)


Briefly:

Many folks believe that an "assistant" embedded processor is the answer to every problem in computer architecture: I/O; DMA; block memory copies; ... manageability; security.

Problem: 

The "assistant" embedded processor often has a less evolved security architecture than the "master" CPU - processor hardware, OS, and SW.   Often the reason that the assistant embedded processor is more efficient is that it has thrown out advanced security features. E.g. virtual memory page protection. E.g. privilege levels.

Oftentimes, the people advocating such "assistant" embedded processors say "It doesn't matter, we are only using it in narrowly constrained ways.  We can prove the code secure."  Yeah, right.

Over time, such "assistant" processors have added more and more security features.

But even when the assistant embedded processor has security features comparable to the master CPU, it still sucks that it is different.

By construction there will be more code in such a system to code review and security audit.  Even if functional code is the same, there is boot code.

Worse, the assistant and master may have comparable but slightly different security models.  E.g. one may have page permissions RWX-RW-RX but not execute-only or read-only, while the other may have all the relevant permissions RWX-RW-RX-R-X. Just a small difference, but it can matter.

Worse yet, there is a limited supply of experts for code reviews and security audits - even for one system, typically the master.   Even more so if they must be expert in both master and assistant processors.  Moreover, even if you can find such experts, it is damned hard to keep both security models in your head at the same time. It is common to say "oh, that's okay on x86 but not on ARC (or MIPS, or ARM, or your favorite embedded processor)", and later realize that you have it wrong.  Or vice versa.

Worse yet if similar but slightly different OSes and SW libraries are available on master and assistant.

Why heterogeneous, given the risks?

Given the security risks of heterogeneous systems with different master and assistant processors, why do them?

Apart from the "my embedded processor is inherently more efficient than your master CPU" wars - which are usually wrong, although possibly true in certain instances (certainly for specialized I/O processors such as GPUs and DSPs) - the biggest reason to have embedded processors separate from the master CPU is:

OS independence.

If the embedded processor provides a high-enough-level API that it can be used without substantial modification for many different OS clients, then you may have less code to maintain (and code review, and security audit).

Furthermore, the very functions that you wish to provide using the assistant embedded processor may need to be protected from the OS.  E.g. scanning for malware.

Plus, in a minor way, the fact that the assistant embedded processor is running a different ISA than the mass-market main x86 CPU makes it a little bit less likely that Joe Random Cracker has the skillz to break in.  But I would not count on that.  (Besides, you can get ISA encoding diversity while preserving ISA semantic consistency. See below.)


Mitigations:

Dedicate parts of Multiprocessors: I like the way that IBM mainframes do manageability: in an LPAR, typically one of the mainframe CPUs is reserved to run the manageability tasks.  Same CPU architecture.  Sometimes even the same OS architecture, although running with hypervisor (MVME) privilege.

You could have different microarchitectures with the same ISA and privilege architecture.   But many companies cannot afford to create such a diversity of microarchitectures, while customers may want to license only one instance of the most popular (and most expensive) core. (IMHO licensing should encourage such consistency, rather than discourage it.)

Shared memory is a security hole:   We live in a world of web services, SAAS (Software As A Service) - get used to it. If the heterogeneous "assistants" in your SOC interface via message passing, or even as if over a local TCP/IP network (hopefully a high-performance, NATed, local network) - well, at least you are in the range of things that we should know how to secure.  Even though we often fail.

Trouble is, this only works for services that can be decoupled via message passing.  It doesn't work for a service that requires access by the assistant embedded processor to master CPU memory, e.g. for virus scans of active DRAM, or for DRAM deduplication.  But it can work well enough for things like virus scanning of data as it flows from NIC to ME to CPU. If it flowed that way...

Message passing not fast enough?  We know how to make it faster.  Zero-copy tricks.   Direct modification of master CPU page tables.   Hardware TLB shootdown mechanisms.  Easier when all parties have similar virtual memory architectures.
      Trouble is, most modern CPUs can only vaguely hope to accelerate message passing interfaces - via a "smart" embedded assistant processor.
      In some ways I advocate standardizing message passing, possibly using risky shared memory tricks - so as to reduce the need to use such shared memory tricks to interface to other "smart" devices.
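To make "message passing over shared memory" concrete: the classic building block is a single-producer/single-consumer ring confined to one small, well-defined shared buffer. Here is a minimal sketch of my own (assuming cache-coherent shared memory and C11 atomics; a real master/assistant mailbox would add doorbell interrupts and, per the firewall discussion below, hardware checks that neither side can touch anything outside the ring):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define RING_SLOTS 256u   /* power of two, so free-running counters wrap cleanly */

struct ring {
    _Atomic uint32_t head;        /* written only by the producer */
    _Atomic uint32_t tail;        /* written only by the consumer */
    uint64_t msg[RING_SLOTS];     /* the only memory the two sides share */
};

/* Producer side (say, the master CPU). Returns false if the ring is full. */
static bool ring_send(struct ring *r, uint64_t m)
{
    uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head - tail == RING_SLOTS)
        return false;                               /* full */
    r->msg[head % RING_SLOTS] = m;
    /* Release: the payload must be visible before the new head is. */
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return true;
}

/* Consumer side (say, the assistant processor). Returns false if empty. */
static bool ring_recv(struct ring *r, uint64_t *m)
{
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    uint32_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (head == tail)
        return false;                               /* empty */
    *m = r->msg[tail % RING_SLOTS];
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return true;
}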

Even higher performance: message passing directly accessing processor registers.   But that is even harder to do heterogeneously, although I can imagine APIs.

But...  the infamous Intel AMT bug occurred because the "smart" embedded processor was running a poorly secured webserver.  Yep... there's no helping it.   Although we should know how to secure a webserver, we keep f**king up.
      But...  a bug in an SOC embedded webserver would not have been so bad if isolated to a subsystem that could only access 1 NIC in a multi-NIC system. It is the fact that the compromised subsystem has such global access, to everything, more than an OS or hypervisor - that is the really bad thing.

IMHO the Principle of Least Privilege is not just a good idea - it should be the law.  As in, if a company designs a system with blatant disregard for the PLP, and if somebody sustains losses as a result of a security compromise, then they should be liable. And more.
     All of these mitigations are really just ways of saying "Principle of Least Privilege".
 
If you have to allow the "smart" assistant processors shared memory access to master CPU memory: Hardware level firewalls. Standard control and status register space and enumeration, so that you can verify that they are correctly configured.   Unchangeable hardware IDs as well as software IDs on bus transactions, so that you can easily create invariants like "the network packet filter is never allowed to directly access the audio microphones".  (Of course, you might want to allow the NIC to access the microphones for ultra-low-power voice conferencing. And the packet filter, for NSA-like stuff.)

Hardware Page Protection Keys:  associated with physical addresses, or at least the addresses seen on the bus. In addition to virtual address based protection.

Memory Encryption:  may not prevent access by malware running on an embedded processor to CPU storage, or vice versa.  But it may prevent secrets leaking, and prevent malware crafting attack packets, whether code or data, to compromise further.

I suppose that I should also mention capability systems for privilege management - OS/SW, but possibly also hardware access to shared memory. Fine-grained.   But I have butted my head against that wall before, most recently with what became Intel MPX.

Conclusion

"Smart" assistant embedded processors have security risks.  

We really want to prevent "smart" assistant embedded processors from being "smart-ass".  Or, worse, "evil genius".




Intel IAMT bug: strncmp(trusted,untrusted,strlen(untrusted))

Intel's embarrassingly negligent IAMT bug seems ... easy to imagine how it happened.

Embedi's analysis shows that the bug is in code that looks like
if( strncmp(computed_response, user_response, response_length) )
    exit(0x99); 
using the user_response length to limit the length of the string comparison, rather than the expected length (which I believe is, in this case, a constant 16 bytes - 128 bits - the length of the MD5 hash used in HTTP Digest authentication).
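To see why this is fatal, consider a minimal, self-contained sketch (simplified, with hypothetical values - not Intel's actual code): if the attacker controls the third argument, an empty response authenticates, because strncmp comparing zero characters is defined to return 0.

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* The hash the server computed; the attacker does not know it. */
    const char computed_response[] = "0123456789abcdef";

    /* The attacker sends an EMPTY response... */
    const char user_response[] = "";
    size_t response_length = strlen(user_response);   /* == 0 */

    /* ...and strncmp over zero characters returns 0, i.e. "equal". */
    if (strncmp(computed_response, user_response, response_length) == 0)
        printf("authenticated!\n");                   /* this is what prints */
    else
        printf("rejected\n");
    return 0;
}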

Immediately I thought of
strncmp(computed_response, user_response, strlen(user_response))
which inspired the riff below

Pay no attention to the riff below

Embedi's writeup indicates that the user_response is actually
struct AUTH_HEAD_VALUE {
    char *str;
    int len;
};
which probably invalidates any supposition that strlen may at one time have been involved.

Nevertheless, the riff is fun, and although inaccurate in detail, probably has some aspects of truth.

Riffing on strncmp(trusted,untrusted,bad_length)

When I heard and saw
strncmp(computed_response, user_response, response_length)
Immediately I thought
strncmp(computed_response,
    user_response, strlen(user_response))
I imagined the original code was
strcmp(computed_response, user_response)
I guessed that some "security audit" might have said
Security Auditor: strcmp is insecure, this code must be changed to use strncmp
Might not have been a human security audit.  Might have been a secure-lint tool.

---

The programmer who made the change to use strncmp looked around for a size_t to use as the maximum length to compare.  If the strings were "naked" null terminated C strings, he may just have guessed wrong, choosing the second rather than the first.

 Embedi's writeup indicates that the user_response is not a naked C string, but is actually
struct AUTH_HEAD_VALUE {
    char *str;
    int len;
};
Hey, wait, there's a length here we can pass to strncmp!  It just happens to be the user response length. The wrong string length.

---

Perhaps it was not obvious (to the programmer making the change) what the buffer length of the computed_response is.  BTW, it is really a buffer length, not a string length.  It might be declared as
typedef uint32_t MD5_hash[MD5_SIZE_IN_WORDS];
or
typedef uint8_t MD5_hash[MD5_SIZE_IN_BYTES];
Or it might have been malloc'ed.

The code might have made some provision for changing eventually to a non-MD5 hash, so you would not want to hard-wire the MD5_SIZE into the code. The code doing the comparison might not need to be aware that the hash involved was MD5.

---

Possibly the hypothetical strcmp code was actually more secure than the strncmp code - so long as both hashes were guaranteed to be null terminated.  So long as the user response was guaranteed not to overflow any buffer allocated for it.

---

But come to think of it, optimized network code probably does not copy the user response from the header into a separate newly allocated string.  It probably sets AUTH_HEAD_VALUE.str to point directly into the buffer containing the headers.

(Or at least the buffer containing part of the headers.  If the headers are split into several buffers... well, that's a bug that has been seen before.)

So, it is probably not "naked null-terminated C string" data.  Probably not:
strcmp(computed_response, user_response)
But if it were, then
strncmp(computed_response, user_response,
            max(strlen(computed_response),strlen(user_response)) )
might have been better.  Really that is equivalent to strcmp - but at least it might silence the security audit tool's warning about strcmp. Replacing it with equally annoying warnings about strlen instead of strnlen.
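For completeness, the more robust repair - a sketch only, with hypothetical names; I do not know what Intel's actual patch looks like - is to validate the untrusted length against the trusted, expected length first, and then compare using only the trusted length:

#include <string.h>

struct AUTH_HEAD_VALUE {        /* as in Embedi's writeup */
    char *str;
    int len;
};

#define RESPONSE_LEN 16         /* this post's assumption: raw MD5 is 16 bytes
                                   (a hex-encoded digest would be 32) */

/* Reject any response that is not exactly the expected length, then
   compare exactly RESPONSE_LEN bytes - never a user-supplied count. */
static int response_matches(const char *computed_response,
                            const struct AUTH_HEAD_VALUE *user_response)
{
    if (user_response->len != RESPONSE_LEN)
        return 0;
    return memcmp(computed_response, user_response->str, RESPONSE_LEN) == 0;
}

(A real implementation would also want a constant-time comparison, to avoid leaking the hash byte by byte via timing.)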

---

But - the code audit / lint tools that might have triggered this may not have been security oriented. They may have been using a buffer overflow detector like valgrind or purify. These may have warned about read accesses beyond the memory allocated to hold the hashes.

Strictly speaking, neither strcmp nor strlen needs to perform buffer-overflowing read accesses, if given properly sized null-terminated C string arguments.  But... if an "optimized" strcmp or strlen is used, it is common to process things 4 or 8 bytes at a time - the number of bytes that fit into a 32-bit or 64-bit register - in which case the code might read beyond the end of the memory allocated for the string, past the terminating null byte.  In past lives, when I was more the "low-level assembly code optimization guy" rather than a security guy, I wrote such code.  It is hard to write optimized strcmp and strlen code that doesn't go past the terminating null but still runs faster than doing it a byte at a time. Even fixed-length strnlen, strncmp, bcopy, and memcpy are hard to write using registers wider than the official granule size without going past the end.  Which is one reason why I advocate special instruction hardware support, whether RISC or CISCy microcode.
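For the curious, here is a minimal sketch of the word-at-a-time trick (my own illustration; it glosses over the pointer alignment that production implementations do first, so that the over-read never crosses a page boundary):

#include <stddef.h>
#include <stdint.h>

/* Word-at-a-time strlen. Correct for any properly null-terminated string,
   but the final 8-byte load may read PAST the terminating null - exactly
   the benign-but-flagged over-read described above. */
size_t fast_strlen(const char *s)
{
    const uint64_t *w = (const uint64_t *)s;   /* assumes s is 8-byte aligned */
    for (;;) {
        uint64_t v = *w++;
        /* Classic bit trick: nonzero iff some byte of v is zero. */
        uint64_t z = (v - 0x0101010101010101ULL) & ~v & 0x8080808080808080ULL;
        if (z) {
            const char *p = (const char *)(w - 1);
            while (*p)
                p++;                           /* pin down the exact null byte */
            return (size_t)(p - s);
        }
    }
}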

Examining my motivation

When I heard about the IAMT bug, my first reaction was "I told them so - I told them that the ARC embedded CPU would lead to security bugs."  Yes, really, I did (see another post).

But I also said to myself "That's what you get when you have the B-team working on an important feature".

Face it: at Intel, at least when the manageability push started, the A-team worked on CPUs, hardware and software.  The guys working on chipsets and I/O devices were usually understaffed and in a hurry.

I don't like thinking such uncharitable thoughts. Moreover, chipsets and I/O devices are more and more important.  One of the smartest guys I know told me that he decided to go into chipsets because there were too many guys working on Intel CPUs, too much bureaucracy, while in chipsets he could innovate more easily.

So I imagined these scenarios, about code reviews and security audits and lint tools
Security Auditor: strcmp is insecure, this code must be changed to use strncmp
and making changes in a hurry, as a way of imagining how such bugs could have happened.

Doesn't excuse the bugs.  But it may be more productive in determining how to prevent such bugs in the future than simply saying (as I heard one security podcaster say): "This is unimaginably bad code.  One wonders how Intel could ever have allowed such code to pass code reviews and security audits.  One wonders if one should ever trust Intel CPU security again."

Conclusion

My overall point is that code reviews, security audits, and tools such as security lints or buffer overflow detectors, may have triggered code changes that introduced this bug to originally correct code.

This is no excuse.  It is even more important to review code changes than original code, since bug density is higher.

Of course, it is possible that the bug was present since the code was originally written.


Tuesday, May 09, 2017

Problem Gluing Outlook to AutoHotKey – Customer Feedback for Nurgo Software


Problem Gluing Outlook to AutoHotKey using AquaSnap

---+ BRIEF

 App windows like Outlook do not have AquaMagnet attraction to my AutoHotKey GUI windows, although the AutoHotKey window is magnetically attracted to Outlook. AquaGlue does not work between these windows.

Q: is this a known problem?

 Q: are there particular settings of the AutoHotKey window that make it visible to AquaSnap?


 ---+ DETAIL

 Not a feature request - more like a problem that I am hoping somebody has already solved.

I just purchased AquaSnap and TidyTabs, with one primary motivation:

I have written an AutoHotKey GUI script that creates an AHK GUI window that contains buttons that are shortcuts for associated applications. E.g. I have one of these AHK "button pads" with shortcuts for Outlook - touch buttons like Archive, file to particular hi/lo priority folders, etc.

I call this AHK-gUIx-SEBP. AHK = AutoHotKey (I have versions in Python, and probably soon in JavaScript). gUIx = Graphical User Interface eXtension. SEBP = Self-Editing ButtonPad.

Usually I use this with Outlook on an external monitor, and the AHK buttonpad window on my Surface Book touchscreen. Rather like a MacOS TouchBar, except on steroids, and editable by me, on the fly.

But sometimes I want to use this with both Outlook and the gUIx-SEBP window on the same screen, with no external monitor. In which case I want all of the tiling window manager goodness that something like AquaSnap can provide.

E.g. I want Outlook to occupy most of the screen, with the AHK-gUIx-SEBP in a narrow strip on the right. (Or left. Or top. Or bottom.) Draggable to change the amount of space dedicated to each window.



Here's the problem that I hope some AquaSnap user has already figured out:

Resizing my AHK-gUIx-SEBP window gets it magnetically attracted (AquaMagnet) to the Outlook window. But not vice versa - the Outlook window does not get magnetically attracted to the AHK-gUIx-SEBP window. AquaSnap glue does not work. And I cannot grab the shared edge and have both windows resized together.

This may well be caused by particular settings of the AutoHotKey window. It does not appear in the task bar, only in the notification area. It is optionally AlwaysOnTop, and semitransparent, for the usual reasons.

I can play around with AHK settings and see if I can make it work with AquaSnap, but I hope that somebody has already done so, and can save me some time.




Tuesday, May 02, 2017

Emacs Fontsets for new Cygwin SurfBook

I recently switched from a MacBook to a Microsoft Surface Book (UNIX/MacOS is nice, but Microsoft is more innovative wrt form factor).



Of course I am still using EMACS.  This go-around I am using emacs-w32 rather than XWin-based emacs.



The default fonts (fontsets, faces) for emacs-w32 were well-nigh unreadable, at least when scaled up onto large high-resolution monitors. Much too thin, too light.



;; Microsoft / emacs-w32
;; w32-standard-fontset-spec
;; "-*-Courier New-normal-r-*-*-13-*-*-*-c-*-fontset-standard"
I first tried bolding everything

(set-face-attribute 'default nil :weight 'bold)
but this lost the places where the emacs faces were already using bold to indicate something.

So eventually I figured out how to create new fontsets out of existing fonts on the system
(create-fontset-from-fontset-spec "-outline-Lucida Sans Typewriter-*-*-*-mono-*-*-*-*-*-*-fontset-lucida_sans_typewriter_AGfs")
(create-fontset-from-fontset-spec "-outline-Lucida Console-*-*-*-mono-*-*-*-*-*-*-fontset-lucida_console_AGfs")
(create-fontset-from-fontset-spec "-outline-MS Gothic-*-*-*-mono-*-*-*-*-*-*-fontset-ms_gothic_AGfs")
(create-fontset-from-fontset-spec "-outline-Modern-*-*-*-mono-*-*-*-*-*-*-fontset-modern_AGfs")
(create-fontset-from-fontset-spec "-outline-Consolas-*-*-*-mono-*-*-*-*-*-*-fontset-consolas_AGfs")
;; Emacs should support proportional, non-monospace font, but occasionally has problems
(create-fontset-from-fontset-spec "-outline-Arial-*-*-*-*-*-*-*-*-*-*-fontset-Arial_AGfs")
(create-fontset-from-fontset-spec "-outline-Times New Roman-*-*-*-*-*-*-*-*-*-*-fontset-Times_New_Roman_AGfs")
(create-fontset-from-fontset-spec "-outline-Lucida Sans Unicode-normal-normal-normal-sans-*-*-*-*-p-*-fontset-Lucida_Sans_Unicode_AGfs")
;; the create-fontset-from-fontset-spec calls above apparently create shorthand names
(and nil
  ;; DISABLED - don't load unless needed
  (set-face-font 'default "fontset-lucida_sans_typewriter")
  (set-face-font 'default "Consolas") ; MS font good for programming
  (set-face-font 'default "Courier")
  (set-face-font 'default "Courier New")
  (set-face-font 'default "Lucida Sans Typewriter")
  (set-face-font 'default "Lucida Console")
  (set-face-font 'default "MS Gothic")
  (set-face-font 'default "Arial")
  (set-face-font 'default "Times New Roman")
  (set-face-font 'default "Lucida Sans Unicode")
  )
;; Default font
(set-face-font 'default "Consolas") ; MS font good for programming
With "Consolas" being my new default font - though I oscillate between Consolas and Lucida Console.



TBD: overlap fonts in fontsets to cover the Unicode characters that I want to use - French, German, Japanese, Chinese, and math symbols.

I miss how Apple had better-integrated Unicode coverage.

---

See also

EmacsWiki: Font Sets: