Friday, May 7, 2010

Miscellaneous things on my mind today.

There's a striking similarity between hidden Markov models and neural networks. For a 1st-order HMM, you could map it to a 4-layer (1 input, 1 output, 2 hidden) feed-forward ANN: use a single input node (which symbol you're currently facing), make every known symbol a 2nd-layer node with a simple 0/1 activation (the input node signals whichever 2nd-layer node corresponds to the current symbol), and use the set of known symbols for the 3rd layer as well. The weights from the 2nd to the 3rd layer then correspond to the state transition probabilities in the HMM.

I don't think this is necessarily an efficient way to do it, but I think it might be an exact mapping. Cool. :)
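To make that concrete, here's a minimal sketch in Python/NumPy, assuming a tiny three-symbol alphabet with made-up transition probabilities (the names and numbers here are mine, purely for illustration): the one-hot "which symbol am I facing" layer, pushed through a weight matrix, just selects a row of the HMM's transition matrix.

    import numpy as np

    # Hypothetical symbol alphabet; in practice you'd grow this as new symbols appear.
    symbols = ["A", "B", "C"]
    index = {s: i for i, s in enumerate(symbols)}

    # 1st-order HMM transition matrix: row = current symbol, column = next symbol.
    # These numbers are made up for illustration; each row sums to 1.
    transitions = np.array([
        [0.1, 0.6, 0.3],
        [0.5, 0.2, 0.3],
        [0.3, 0.3, 0.4],
    ])

    def one_hot(symbol):
        """The 'input -> 2nd layer' step: activate exactly one symbol node (0/1)."""
        v = np.zeros(len(symbols))
        v[index[symbol]] = 1.0
        return v

    def next_symbol_distribution(symbol):
        """The '2nd -> 3rd layer' step: a linear layer whose weights are the
        transition probabilities. Feeding a one-hot vector through it just
        selects the matching row of the transition matrix."""
        return one_hot(symbol) @ transitions

    print(next_symbol_distribution("A"))  # -> [0.1, 0.6, 0.3]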

If you want a second-order Markov model, use two nodes in your input layer, (S1, S2) pairs for your 2nd layer, and a single-symbol set for your 3rd. Definitely not a space-efficient way to do it, but also an exact mapping. After training (you have to get your symbol set from somewhere, which means you're going to grow your ANN as you build your symbol set), if you're comfortable with the idea, you can cull the absolute weakest relationships from your 2nd-order set. Only cull as space limitations force you to, or you'll lose weak-but-significant relationships.
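Here's one way that 2nd-order layer and the culling step could look, again as a rough Python/NumPy sketch. The toy sequence is made up, and normalizing pair-to-symbol counts into probabilities is my assumption about how you'd fill the weights in.

    import numpy as np
    from itertools import product

    symbols = ["A", "B", "C"]
    index = {s: i for i, s in enumerate(symbols)}

    # 2nd-order: one 2nd-layer node per (S1, S2) pair, so the weight matrix is
    # (|symbols|^2) x |symbols|.  Here it's filled with counts from a toy sequence.
    pairs = list(product(symbols, repeat=2))
    pair_index = {p: i for i, p in enumerate(pairs)}
    counts = np.zeros((len(pairs), len(symbols)))

    sequence = ["A", "B", "C", "A", "B", "A", "C", "B", "A", "B", "C"]
    for s1, s2, s3 in zip(sequence, sequence[1:], sequence[2:]):
        counts[pair_index[(s1, s2)], index[s3]] += 1

    # Normalize counts into transition probabilities (the 2nd-order weights).
    row_sums = counts.sum(axis=1, keepdims=True)
    weights = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

    # Culling: zero out only the absolute weakest non-zero relationships, and only
    # when space forces it, so weak-but-significant links survive as long as possible.
    def cull_weakest(w, how_many):
        flat = w.flatten()
        nonzero = np.flatnonzero(flat)
        weakest = nonzero[np.argsort(flat[nonzero])[:how_many]]
        flat[weakest] = 0.0
        return flat.reshape(w.shape)

    weights = cull_weakest(weights, how_many=1)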

A 3rd-order Markov model would have three input nodes, (S1, S2, S3) tuples for the 2nd layer, and a single-symbol set for the third. Again, cull as required.

For particularly interesting results, you can combine the orders: with inputs I1, I2 and I3, feed I1 into the 1st-order ANN mapping, I1 and I2 into the 2nd-order mapping, and I1, I2 and I3 into the 3rd-order mapping, then map all three orders to the same output node. (Or higher orders, if you have unbelievable processing and storage capability...)
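A rough sketch of that combined arrangement follows. The three per-order mappings are stubbed out with uniform guesses, and the mixing weights are entirely made up; the only point is the wiring, with all three orders feeding the same output node.

    import numpy as np

    symbols = ["A", "B", "C"]
    n = len(symbols)

    # Stand-ins for the three trained mappings described above; each takes the
    # most recent 1, 2 or 3 symbols and returns a distribution over the next one.
    def order1(i1):
        return np.full(n, 1.0 / n)

    def order2(i1, i2):
        return np.full(n, 1.0 / n)

    def order3(i1, i2, i3):
        return np.full(n, 1.0 / n)

    def combined_output(i1, i2, i3, mix=(0.2, 0.3, 0.5)):
        """Feed I1 to the 1st-order mapping, (I1, I2) to the 2nd-order mapping,
        and (I1, I2, I3) to the 3rd-order mapping, then sum them into the same
        output node with made-up mixing weights."""
        return (mix[0] * order1(i1)
                + mix[1] * order2(i1, i2)
                + mix[2] * order3(i1, i2, i3))

    print(combined_output("C", "B", "A"))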

--

I'm happy.

--

If every piece of software got as much detail-oriented attention as Toyota's code is getting right now, but *before* disaster struck, there'd be millions of happier developers out there. Software wouldn't suck as much, and development and debugging tools would be absolutely incredible (far beyond anything I've even heard of, including valgrind, lint and VS2010's rather impressive debugging-aid feature set) as a result of finding ways to reduce developer time spent in analysis and bugfixing. If software were nearly as difficult to consider "ready for release" as hardware is, I think you'd see something akin to Moore's Law occur in software utility (as a mixed metric of performance and feature set).

People got used to mediocre software. They hate it, but they got used to it. As a result, software pricing models more or less standardized around writing mediocre software, and people get sticker shock when they hit a requirement for non-mediocre software. Put another way: "fast, good, cheap -- pick two," except that popular priority weighs heavily toward fast and cheap.

--

Flashbacks to Ender's Game when I accidentally try to call CString::GetBugger(). See also, Start->Run->%APPDADA% instead of %APPDATA%. Dargo, resolve his %APPDADA% for him!

--
:wq
