Disclaimer

The content of this blog is my personal opinion only. Although I am an employee - currently of Nvidia, in the past of other companies such as Imagination Technologies, MIPS, Intellectual Ventures, Intel, AMD, Motorola, and Gould - I reveal this only so that the reader may account for any possible bias I may have towards my employer's products. The statements I make here in no way represent my employer's position, nor am I authorized to speak on behalf of my employer. In fact, this posting may not even represent my personal opinion, since occasionally I play devil's advocate.

See http://docs.google.com/View?id=dcxddbtr_23cg5thdfj for photo credits.

Monday, April 17, 2023

SAD: 12-year-old Blogger post warning that my wiki was hacked flagged as malware

Almost 12 years ago, in 2011, I posted on my Google Blogger.com blog a warning that my very old wiki had been cloned, and that the clone was probably hosting malware or the like.

Excerpts of that post:

Googling "..." today, I found my own website, comp-arch.net. / But I also found another website, waboba.info, that seems to be a clone of comp-arch.net. / This is probably a malware site, probably directing people to attack code, or at least trying to promote search engine scores. / In a way, it is flattering that comp-arch.net might be considered worth cloning. But then again, I imagine that this sort of thing is automated by the bad guys. / If you ever use comp-arch.net stuff, beware of waboba.info, and other possible clones.

Today I received the email (see below) from "The Blogger Team",

saying that my post warning about possible malware violated their malware policy (which also see below).

So a warning about possible malware is interpreted as malware.

IMHO this was almost surely an automated system, no humans involved. Unclear whether it was a stupid keyword search or stupid ML - in any case, a probably stupid automated system. And if a human was involved, they were either stupid or too busy to pay attention.

This is so old, and I so seldom use Blogger anymore, that it is not worth trying to fix. I'm only posting this because I think it's a sad commentary on, or example of, the state of the world.

It will be amusing to see if this post is also flagged as malware.
  • Yep, so it is.
  • I very much doubt that any true intelligence, human or artificial, could point out that this post violates their malware policy.

It has been obvious that Google has deprecated Blogger and BlogSpot. I wonder if this sort of hassle is a precursor to them finally taking them off-line. It's probably time to make sure that I've taken out all of the content.



---+ The Email from "The Blogger Team" received 4/17/2023

 

Hello,

 

As you may know, our Community Guidelines (https://blogger.com/go/contentpolicy) describe the boundaries for what we allow-- and don't allow-- on Blogger. Your post titled "comp-arch.net mal-hacked - cloned, maybe broken into." was flagged to us for review. We have determined that it violates our guidelines and have unpublished the URL http://blog.andy.glew.ca/2011/08/comp-archnet-mal-hacked-cloned-maybe.html, making it unavailable to blog readers.

 

Why was your blog post unpublished?

Your content has violated our Malware and Viruses policy. Please visit our Community Guidelines page linked in this email to learn more.

 

If you are interested in republishing the post, please update the content to adhere to Blogger's Community Guidelines. Once the content is updated, you may republish it at https://www.blogger.com/go/appeal-post?blogId=2425290326823263574&postId=389663627174261254. This will trigger a review of the post.

 

For more information, please review the following resources:

 

Terms of Service: https://www.blogger.com/go/terms

Blogger Community Guidelines: https://blogger.com/go/contentpolicy

 

Sincerely,

 

The Blogger Team



---+ Their Malware Policy

Malware and Similar Malicious Content

Do not transmit malware or any content that harms or interferes with the operation of the networks, servers, end user devices, or other infrastructure. This includes the direct hosting, embedding, or transmission of malware, viruses, destructive code, or other harmful or unwanted software or similar content. This also includes content that transmits viruses, causes pop-ups, attempts to install software without the user’s consent, or otherwise impacts users with malicious code. See our Safe Browsing Policies for more information.




Tuesday, February 22, 2022

RISC-V extension names considered harmful



This is a special case of "stupid non-human-friendly names considered harmful".


RISC-V, like any instruction set, has extensions. The extensions need names. The current standard names are stupid and human-unfriendly.


For example, I just now saw email that talks about "working through the process to move Svnapot aka the former Zsn to public review". I am the guy who originated the RISC-V NAPOT proposal for large pages. *I* had to do a double take to parse Svnapot - but at least "NAPOT" appeared in there somewhere, after I mentally parsed Svnapot into S.v.napot, where S = system, v = virtual memory, ...

The earlier name, Zsn, I could never remember - and, again, I contributed the term NAPOT to the virtual memory discussion.


Stupidly short and obscure names create friction. They waste mind share.




Other RISC-V extension names include:


Zicsr

Zifencei

Zam

Ztso


For some of these you can guess what they apply to.



AG comment: have both verbose and compact names


Z-atomic-memory-operations ==> Zam


and so on.
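A sketch of what such a verbose-to-compact mapping might look like. The compact names (Zicsr, Zifencei, Zam, Ztso) are real RISC-V extension names; the verbose forms here are my own illustrative inventions, not part of any RISC-V specification:

```python
# Hypothetical verbose aliases for a few real RISC-V extension names.
# The compact names are real; the verbose forms are illustrative only.
VERBOSE_TO_COMPACT = {
    "Z-control-status-registers": "Zicsr",
    "Z-instruction-fence-i":      "Zifencei",
    "Z-atomic-memory-operations": "Zam",
    "Z-total-store-order":        "Ztso",
}

def compact_name(name: str) -> str:
    """Map a verbose extension name to its compact form.

    Compact names pass through unchanged, so tools could accept either.
    """
    return VERBOSE_TO_COMPACT.get(name, name)
```

A compiler driver could then accept the memorable verbose form on the command line and canonicalize it internally, sidestepping the "must fit in 8 characters" argument.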



---



Did I mention that one of the reasons given for short, obscure extension names was that they needed to fit within 8 or so characters in compiler command-line options?



Do It (All) Upfront vs Do It As Needed

Two different styles  of getting work done:


When I started as a professional programmer, e.g. writing C code for PC hardware, one of my "superpowers" was that I might find a hardware manual for a device, e.g. a UART, and I would write a fairly complete implementation, e.g. a header file that had #defines for all of the encodings used to write the control registers of that device.

I.e., I did it all upfront. Actually not all, but I did a lot upfront.
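As a sketch of what "all upfront" means here - transcribing the whole register map from the manual, not just the fields needed today - the constants below follow the classic 16550 UART register layout (the offsets and bits are standard 16550 facts; the names are my own choices, in Python rather than a C header for consistency with the other examples in this post):

```python
# "Do it all upfront": the full 16550 UART register map, not just the
# registers needed for the task at hand.
UART_RBR = 0x0   # receive buffer (read)
UART_THR = 0x0   # transmit holding (write)
UART_IER = 0x1   # interrupt enable
UART_IIR = 0x2   # interrupt identification (read)
UART_FCR = 0x2   # FIFO control (write)
UART_LCR = 0x3   # line control
UART_MCR = 0x4   # modem control
UART_LSR = 0x5   # line status
UART_MSR = 0x6   # modem status
UART_SCR = 0x7   # scratch

UART_LCR_DLAB      = 0x80  # divisor latch access bit
UART_LSR_DATA_RDY  = 0x01  # received data available
UART_LSR_THR_EMPTY = 0x20  # transmit holding register empty

def can_transmit(lsr_value: int) -> bool:
    """True when the transmit holding register is empty."""
    return bool(lsr_value & UART_LSR_THR_EMPTY)
```

The payoff is exactly the one described below: when you later need, say, the modem status register, the name is already there and you don't have to stop and reopen the manual.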

 This served several purposes

  •  It was a good way for me to study the device and the hardware/software interface that I was using.
  •  Compared to only implementing what I needed at the moment, it meant that when I realized I needed something else I usually did not have to stop to look at the manual. I could usually just pick the typical name that I would use out of midair, code it, and it would work.
BTW, I conjecture that one of the reasons I did things like this upfront was that before I was a professional programmer, when I was a programmer-on-duty at the university helpdesk or a lab administrator, the code that I wrote was often used by lots of people: several classes of students, or all of the computer users at the engineering RJE sites that I used. It wasn't just a question of what I knew was going to be used in advance. It was a question of being able to head off problems for my users.

In my Masters degree, when I implemented an instruction set decoder generator (IIRC for MIPS), I went out of my way to implement almost all of the instructions in the instruction set, not just the instructions that I needed for my simulator project. This allowed me to use randomized instruction testing, and also to test my decoder tools against all of the instructions and all of the libraries on the systems I could find.

This do-it-upfront approach served me in good stead for many years.

On the other hand, it probably cost me  an extra 6 months finishing my Masters degree. :-(  But the extra knowledge I acquired was probably part of the reason Intel hired me as a computer architect NCG :-)



I remember being absolutely flabbergasted when I learned that Intel's own simulators were not capable of decoding all of the instructions in the machine, and that instructions were only added as they were needed by performance studies. Note that this was not when I arrived at Intel the first time, but after I left for graduate school and then returned to Intel. I.e. this incremental approach was adopted while I was away from Intel.

Obviously this was the right way to do things, wasn't it? Obviously Intel was successful... Although let me note that this was the time when Intel fell behind the competition.


I give more credit to the examples of XP (extreme programming) and agile programming - work incrementally, writing tests in advance - which caused me to rethink, twice and many more times, my do-it-upfront attitude.

One of the big things I did learn from Intel and from XP/agile is the cost of testing. When I wrote a complete header file from a manual page, I usually wasn't writing a test for every individual feature. Oftentimes writing the test takes two or three or many more times longer than actually writing the simple interface.

But note what I said about a reasonably complete implementation of an instruction set decoder allowing randomized testing, and testing using all of the actual real code that you can find. Those were tests that were easy to write. And, for the most part, it was easy to implement the code so that the randomized tests passed.

Importantly, the way that I chose to implement my instruction set decoder generator, which also inspired Intel's current microcode approach, is decentralized: pattern driven. It makes it very easy to add or remove arbitrary instruction encodings. The human programmer (me) doesn't write code that looks at bitfields one at a time. The human programmer writes patterns directly derived from the instruction set tables, and has a program that generates the (hopefully optimized) code that looks at the bitfields described by the patterns. Independent patterns make it easy to add and subtract things on the fly, whereas the older style of centralized instruction set decoder code needs to be modified as arbitrary things are added or subtracted.
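A minimal sketch of what "pattern driven and decentralized" means here, with a toy 8-bit instruction set of my own invention (the real generator emitted optimized decode code from the patterns; this sketch just interprets them directly):

```python
import random

# Each instruction is one independent (pattern, name) entry. Adding or
# removing an encoding means adding or removing one line; no central
# decode routine needs editing. '0'/'1' are fixed bits, '.' is don't-care.
# These encodings are illustrative, not a real instruction set.
PATTERNS = [
    ("0000....", "add"),
    ("0001....", "sub"),
    ("1000....", "load"),
    ("1001....", "store"),
]

def compile_patterns(patterns):
    """Turn each pattern string into a (mask, match, name) triple."""
    compiled = []
    for bits, name in patterns:
        mask  = int("".join("1" if b in "01" else "0" for b in bits), 2)
        match = int("".join(b if b in "01" else "0" for b in bits), 2)
        compiled.append((mask, match, name))
    return compiled

def decode(word, compiled):
    """Return the instruction name; report UNKNOWN rather than misdecode."""
    for mask, match, name in compiled:
        if word & mask == match:
            return name
    return "UNKNOWN"

def random_test(compiled, trials=1000):
    """Randomized testing falls out almost for free: every random word
    must match at most one pattern (no ambiguous decodes)."""
    for _ in range(trials):
        word = random.randrange(256)
        hits = [n for m, v, n in compiled if word & m == v]
        assert len(hits) <= 1, f"ambiguous decode for {word:#04x}: {hits}"
```

Because unmatched words come back as UNKNOWN rather than being misdecoded as something else, an incomplete pattern table degrades safely - which is exactly the property discussed below for the later, incrementally built Intel simulators.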

So my decentralized approach makes it easy to disable particular test cases, and therefore easy to enable the general case and disable only the cases that are too hard to test at any point in time.

But conversely, my decentralized approach also makes it easy to add things only as needed, which is what those later Intel simulators did. And as long as the decoders were complete - as long as the decoders did not incorrectly report that an unimplemented instruction was something else - they would at least catch the unimplemented things.

What's the cost?

Well, my upfront approach often cost me time upfront, but helped me become an expert.

The do-it-later approach might have reduced time upfront. But sometimes it produced really surprising errors - like telling you that there was an unimplemented instruction when you knew very well that it was a fully implemented instruction - requiring a user who was not a simulator expert to learn what the simulator was not implementing. It also led to project scheduling surprises: when you were implementing a feature, running your traces/performance benchmarks/workloads, all of a sudden a hitherto unimplemented instruction was discovered - one that you had not planned on spending the time to implement.

There needs to be a balance here  between upfront cost and avoiding surprises downstream.

For the instruction set simulator/decoder generator example, I think the right balance might be along the lines of:
  •  completely decoding, so that you can provide a full disassembly of all instructions
  •  but not necessarily implementing them, if the implementation is too complicated
Although for an OOO micro-dataflow simulator I think it would also be reasonable to provide all dataflow inputs and outputs, such that you could correctly implement the dataflow, albeit not necessarily with reasonable latency characteristics, and then to have the simulator count and flag how many such inaccurately modeled instructions are present.


---


The fellow who espouses only doing stuff  as needed will often win out in the corporate rat race:  managers always like the lowest work estimates.

And although I mentioned that XP/agile tends to lead to incremental work, XP/agile also encourages refactoring, to reduce technical debt.

Doing stuff upfront reduces technical debt - but of course only if you actually use the stuff that you did upfront.

Deferring stuff to do as needed - well, that's the definition of technical debt, assuming you actually need it.


Another note: doing stuff upfront often exposes excess complexity, whereas defining stuff (e.g. the actual instruction set) but not implementing it in your actual performance-accurate simulator hides the complexity under the carpet. IIRC many of the unimplemented features were things like security, virtualization, and encrypted memory - supposedly not performance critical, therefore supposedly not belonging in the performance simulator.


===


The above thoughts are me looking back retrospectively at computer architecture simulators, a very large part of my career.


However, they were inspired today by my working on speech commands so that I can control my computer without exacerbating my computeritis hand pain.


Unfortunately, the standard way to have a speech recognition program like Nuance Dragon control other applications is to emulate keyboard and mouse events.

Note that I do not call this the BKM, because it is not the best known method - the BKM is for the application to have a command language, like GNU EMACS has. But that is not the state of the art in speech recognition.

Unfortunately, I have to use other applications in addition to GNU EMACS - applications like Microsoft OneNote and Outlook and ...

Usually when I start using an application via speech recognition, I do several things:
  •  I start by googling for keyboard shortcuts for the application in question. When I find them, it is usually a fairly simple set of text edits to convert them into a primitive command set.
  •  I start using things, and over time I observe the need for new commands.
  •  One of the key things is to record commands that I want to say but which turn out to be errors.
  •  When I would like to have a speech command to do something that is currently on a menu tree, I will typically take screenshots of the menu tree, and devise commands appropriate to each of the leaves.
Note: to some extent, speech recognition systems like Dragon and Microsoft's built-in support allow you to navigate menu trees by speech. But frequently you can say things much faster and more conveniently - and, perhaps more importantly, more memorably and more naturally - with appropriate speech commands rather than navigating "menu 1/menu 2/menu 3 ..."
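The steps above - scrape the keyboard shortcuts, then edit them into a primitive command set - can be sketched as a table mapping spoken phrases to keystroke sequences. The phrases and bindings below are illustrative, not a real shortcut list; a real grammar would be written in the speech engine's own command language (e.g. Dragon scripting), not plain Python:

```python
# A primitive command set derived from a scraped keyboard-shortcut list:
# spoken phrase -> keystroke sequence. All entries are hypothetical
# examples, not actual OneNote or Dragon bindings.
COMMANDS = {
    "new page":       "ctrl+n",
    "highlight that": "ctrl+shift+h",
    "insert link":    "ctrl+k",
    "insert table":   "ctrl+alt+t",
}

def keystrokes_for(phrase: str) -> str:
    """Look up the keystroke sequence for a spoken phrase.

    Returns "" for unrecognized phrases, so the caller can log them -
    recording commands you *wanted* to say is how the command set grows.
    """
    return COMMANDS.get(phrase.lower().strip(), "")
```

The empty-string return for misses matters: logging the phrases that failed is exactly the "record commands that I want to say but which turn out to be errors" step.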

So I have been using speech commands for Microsoft OneNote for quite some time. But today I realized that I had not already implemented speech commands for things like drawing:
  •  e.g. I said "... insert line ..." and there was no insert line command
  •  so I had to navigate through the menus: "drawing > shapes > use arrow keys to select the graphic line shape"
This annoyed me. I was a little bit pissed off that I had not implemented this upfront.

Of course, when I implemented "... draw line" I also implemented all 16 items on the drawing shapes menu, with appropriate aliases or synonyms, e.g. "insert diamond/rhombus/lozenge...".

Do I wish I had done this upfront? Now, yes I do. However, at the time I implemented the earlier sets of OneNote commands, I remember getting exhausted. So perhaps it was appropriate.

I also remember, and regret to admit, that I still do not have a really good way of automating the testing of my speech commands for OneNote. I have much better automation for my GNU EMACS speech commands, not so much for the majority of Microsoft applications. Which was one of the reasons why it was exhausting. As I said above, implementing the command is often a lot easier than testing it.

Why am I writing this?

Certainly not to say that I know the best known method, whether to do it all upfront or to do it incrementally.

Musing to myself about how to better implement some of these things:
  •  I'm really glad that I'm using speech recognition. It is very much helping my computeritis pain.
  •  But I sometimes despair when I need to write a whole new set of commands.
  •  I would like to accelerate the process.
I am thinking that perhaps this "defer subtree" approach is reasonable: do all of a particular level of the menu tree upfront, but defer subtrees until needed.
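A sketch of the "defer subtree" idea: generate commands for an entire menu level upfront, but expand a submenu's commands only when it is first needed. The menu tree here is illustrative, not OneNote's actual menu structure:

```python
# A toy menu tree: dict values are submenus, None marks a leaf command.
# Illustrative only - not the real OneNote menus.
MENU = {
    "draw": {
        "line": None,
        "shapes": {"diamond": None, "circle": None, "arrow": None},
    },
    "insert": {"table": None, "picture": None},
}

def commands_for_level(tree, prefix=""):
    """Generate commands for one menu level; submenus become deferred markers."""
    commands = {}
    for name, sub in tree.items():
        phrase = f"{prefix}{name}".strip()
        if isinstance(sub, dict):
            commands[phrase] = ("DEFERRED", sub)   # expand later, on demand
        else:
            commands[phrase] = ("LEAF", None)      # ready to use now
    return commands

def expand(commands, phrase):
    """On first use of a deferred submenu, generate its whole level too."""
    kind, sub = commands[phrase]
    if kind == "DEFERRED":
        commands.update(commands_for_level(sub, prefix=phrase + " "))
        commands[phrase] = ("EXPANDED", None)
    return commands
```

So the whole top level is done upfront (no surprises at that depth), while the exhausting work of enumerating every submenu is deferred until a submenu is actually hit - a middle ground between do-it-all-upfront and do-it-as-needed.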



#Speech-recognition  #work-scheduling #HFIM


Monday, August 17, 2020

WANTED: better security => fewer updates => better reliability

Wasted two hours this morning: when I started, neither the builtin keyboard nor the trackpad of my Surface Book 3 was working. Fortunately, the touchscreen was working, but it is hard to do real work without a real keyboard.

After fruitless fix recommendations from the web, I eventually uninstalled recent Windows updates, and after a few reboot cycles the keyboard and trackpad were working again.

The most recent updates were four days ago. I remember wondering if I should delay the update, but the desire to use recent security fixes won out.

I do not know with certainty that the updates caused my problems this morning. It might've been a coincidence, a random disk error.

But... I feel pretty sure that I wouldn't be applying updates anywhere near as frequently if there were not security issues. Otherwise, why break things that are working? And software updates are one of the biggest causes of computer problems, both for me and for other people.

This is one of the reasons why I work on security technologies like capabilities/checked pointers and  memory tagging:   not just security, but also to increase reliability.

Of course security and reliability go together. The very concept of a denial of service attack means that poor security reduces reliability and availability. But it's not just DoS attacks - the very act of a software update, for security, reduces reliability.

---

Unfortunately, I never really got to see my security projects through to completion at Intel: MPX is a piss-poor version of what I wanted to do with respect to capabilities/checked pointers.

I hope that I can help such security technologies become real as part of RISC-V.



Thursday, August 13, 2020

Spicy Food Cures Bad Allergy Attack !?!

 BRIEF: multi-day allergy attack seems to have been cured by spicy food !?!


A couple of days ago I went for a run in the evening (OK, more like a jog/walk), and came back with a really horrible allergy attack: my left eye almost completely swollen shut, my right eye sore, my nose running. An inhaler helps with wheezing, but not with the other symptoms.

I had been planning to work later that evening, at least to read email, but I lost the rest of that evening, the entire next day, and into the day after that. Not to mention losing two nights of sleep, propped up for drainage. Using a CPAP is disgusting when the mask fills up with nasal drainage. (Not mucous - that would be really disgusting - but still.) Working when your eyes are swollen shut is also a challenge.

Things eased a bit by the second day, but were still bad. I consulted Dr. Google. Recommended home remedies included:

  1. Flush - e.g,. with a Neti Pot
  2. Spray - e.g. Afrin decongestant spray, or simple saline
  3. Hydrate
  4. Steam
  5. Spice
I did not have a Neti pot, nor distilled water, nor the salt-like packets that Neti pots come with. I have tried Neti pots in the past, without much success, although many recommend them.

Spray - I used Afrin from the beginning. Helped, but not enough.

Hydrate - here I just wasn't thinking. Apparently dehydration is often the cause of such nose/sinus/eye symptoms - "dehydration makes your nose and eyes water?" At the moment, pretty much all that I drink is water, lemon juice, tea and the occasional cola. Lots of caffeine, especially when I am trying to work while sick. I was probably making myself worse.

Steam: head over a bowl of hot water under a towel, and/or a hot bath/shower. Helps, but not enough. (Even pre-covid, taking a sauna at the gym with cold-like symptoms was frowned upon.)

But that last suggestion: SPICY HOT FOOD.  That I can do.


So I did: a package of vegetable korma, augmented by sriracha and other hot sauces. And, within an hour or so, I was much, much, better.


Next time, I will try the hot sauce as soon as I feel the allergy attack coming on! I wish I had not waited so long.


---


Other:


I was desperate enough that I drove to the coast to try to escape the Willamette Valley's cloud of pollen and dust from haying, winnowing, and field burning. Helped, but not enough. BTW, it may not have been smart to drive with one eye swollen shut and my nose running. I was desperate!


During the attack one particular place in my nose BURNED - as if I had inhaled a prickle burr and it had stuck.  Even now, days later, I still feel this hot spot.



---


Posting this


a) As a sad commentary on my current lack of social life.  This is the sort of thing you might mention in the break room at the office.


b) As a reminder to myself for next time


c) In the hopes that it might help someone else... My blog is unlikely to be high in Google searches by a fellow sufferer. TBD: find a newsgroup or forum for allergy sufferers.

Sunday, June 21, 2020

ISO bicycle turn signal SYSTEM - wireless controller, pairable with multiple signal lights/colors

I love bicycles: in my younger days, road bikes; then touring and cargo bikes; now an e-bike.

My rides always involve busy roads that have no shoulders, steep hills (coasting speeds > 30 mph), crossing traffic lanes, often only one lane in each direction, and no protected crossings where I need them. I feel that I need a rearview mirror and turn signals. This post is mainly about turn signals.

While it would be great if the turn signals could connect to the e-bike battery, that's not a requirement. Lights with batteries or, better, USB rechargeable, would be fine.


Wireless turn signal controls are all over the web, including Amazon. As far as I can tell, the wireless bicycle turn signals that I have found via Google and Amazon have one controller, and one or two signals.  There is no ability to pair extra signal lights to the same controller.
 
It seems to me that it should be possible to have a SYSTEM of wireless turn signals for a bicycle.  

A wireless turn control mounted on the handlebars or nearby.

Multiple sets of lights - not just one or two, but multiple sets attached at different places on the frame:
  • E.g. on the rear, so that they are not hidden when I am carrying a full load of cargo; 
  • on the seatpost; 
  • on the handlebar ends; 
  • possibly integrated in rearview mirrors mounted on handlebar ends
  • Less common
    • ?pedals?
    • ?front facing turn signals?
    • sideways -- e.g. wheel lights.
Different colors: amber, red, etc.

Different turn indications: flashing, arrows... Heck, I would like an "I am slowing down" indicator for when I am descending a steep narrow road at speed, with a car right behind me.

Different attachment styles. E.g. rings suitable for seatposts and handlebars; handlebar ends; velcro straps for my e-bike, which doesn't have the thicker posts that such lights would otherwise need.

As with any such battery-driven lights, redundancy - two lights, on either side, in either direction - would be best, for the case where one battery runs out. Quick release would make it easy to charge them at home or at the office, plus protect against theft of the accessory light, as when parked at a grocery store.