Tuesday, May 18, 2010

Miscellaneous things on my mind today

Thinking some more about that Markov->ANN mapping, and wondering if
there'd be a better way to represent the signals from the inputs to
the first layer. Ideally, that'd involve getting the symbols to
correspond to a value on a meaningful range, rather than a set of
discrete values. Let's say I wanted to make that integer range
[0,2

Monday, May 10, 2010

Miscellaneous things on my mind today

It occurs to me that two fic geeks might find reading through the TVTropes pages for their favorite pieces of fiction a romantic way to spend an afternoon.

--

Pondering a message-passing communication system tuned for bulk data (parcel) transfer. Use an addressing scheme like email's: To: username@example.com

Sender, relay and recipient nodes operate on a load-sharing basis similar to BitTorrent. Whereas with email a sender has a single SMTP target, this system might have multiple such targets, each of which serves as a feeder for the destination recipient(s). Relaying nodes may cease feeding either once all recipients' nodes have full copies of the parcel, or once a TTL metric has been met.

Sender-signing should be explicitly supported, both as a means of ham/spam separation and as a means of authentication/accounting where necessary.

Parcel bodies should be compressed and encrypted by default.
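
A minimal sketch of what a parcel and a relay's stop condition might look like. All names and fields here are invented for illustration, not a spec:

    import hashlib
    import time
    from dataclasses import dataclass, field

    @dataclass
    class Parcel:
        sender: str            # e.g. "username@example.com"
        recipients: list       # one or more To: addresses
        payload: bytes         # compressed + encrypted body
        signature: bytes       # sender's signature over the payload
        ttl_seconds: float     # how long relays keep feeding
        created: float = field(default_factory=time.time)

        def digest(self):
            # A content hash lets relays verify pieces, BitTorrent-style.
            return hashlib.sha256(self.payload).hexdigest()

    def should_keep_feeding(parcel, recipient_done):
        # A relay stops once every recipient node reports a full copy,
        # or once the parcel's TTL has elapsed -- whichever comes first.
        if all(recipient_done.get(r, False) for r in parcel.recipients):
            return False
        return time.time() - parcel.created < parcel.ttl_seconds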

--

Just got the geek necessities
The simple geek necessities
Don't worry 'bout your food or your hygiene...

--

If you're standing on Four Corners, and you do something that's a crime under each state's law, which state's prosecution takes priority?

--

I don't think backprop is relevant to the HMM->ANN mapping, because ANN construction is done as a component of its training, and weight adjustments are made incrementally by each node itself as it gains experience. No final "result fitness" observation is needed, because the from-map ANN inherently evolves the same way the HMM would.

Actually, looking at it further, what I have in mind could be considered a variation of a "cascading neural network", except the weights aren't fixed on node creation. Also, I don't see reference to any types of networks that allow for deconstruction of nodes.
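
To make that concrete, here's a minimal sketch (names invented) of the kind of local, incremental update I mean -- each node just keeps transition counts, exactly as the HMM would, and its outgoing weights fall out of those counts:

    from collections import defaultdict

    class IncrementalNode:
        # Each node keeps its own transition counts and derives its
        # outgoing weights from them: training is local and incremental,
        # with no backprop pass and no global fitness score.
        def __init__(self):
            self.counts = defaultdict(int)
            self.total = 0

        def observe(self, next_symbol):
            self.counts[next_symbol] += 1
            self.total += 1

        def weight(self, next_symbol):
            # Identical to the HMM's maximum-likelihood estimate of the
            # transition probability.
            return self.counts[next_symbol] / self.total if self.total else 0.0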

--

I like this way of posting my thoughts; it keeps me from being the uber-spammer on the social networks I'm in. Plus, it's like a serialization of free association in a palatable form.

--

Hm. German. Schadenfrau. Heh.

--

Need to start carrying my 3e/PF books around. Or PDFs thereof.

--

Barcamp presentation ideas: 1) Rosetta Code and programming chrestomathy. 2) Wikis and more effective cross-referencing.

--

I once had a conversation with a guy I think might have been high; we both thoroughly enjoyed not keeping a single context or topic for longer than three sentences.

--
:wq

Sunday, May 9, 2010

To learn, to Create, to Think

To learn: Surround yourself with people who know more than you. Can't find any? Try another topic. Whatever you do, don't stop.

To create: Think about different things at the same time.

To think: You'll have to figure this one out on your own.

--
:wq

Friday, May 7, 2010

Miscellaneous things on my mind today.

There's a striking similarity between hidden Markov models and neural networks. For a 1st-order HMM, you could map it to a 4-layer (1 input, 1 output, 2 hidden) feed-forward ANN: give it a single input (what symbol you're facing) and make every known symbol a 2nd-layer node with a simple 0/1 input value (the first-layer node signals whichever 2nd-layer node corresponds to the current symbol). Use the set of known symbols for the 3rd layer as well, and the mapped weights from the 2nd to the 3rd layer correspond to the HMM's state-change probabilities.

I don't think this is necessarily an efficient way to do it, but I think it might be an exact mapping. Cool. :)
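
Here's a toy sketch of the 1st-order mapping in Python (the alphabet and transition matrix are made up): with a one-hot 2nd layer, the 2nd-to-3rd layer weight matrix is literally the HMM's transition matrix.

    import numpy as np

    # Made-up 3-symbol alphabet and row-stochastic transition matrix:
    # T[i, j] = P(next symbol is j | current symbol is i).
    symbols = ["a", "b", "c"]
    T = np.array([[0.1, 0.6, 0.3],
                  [0.5, 0.2, 0.3],
                  [0.3, 0.3, 0.4]])

    def forward(current):
        # Layer 1 -> 2: the single input selects (one-hot) the 2nd-layer
        # node that corresponds to the current symbol.
        hidden = np.eye(len(symbols))[symbols.index(current)]
        # Layer 2 -> 3: the weight matrix *is* the transition matrix, so
        # each 3rd-layer activation is a next-symbol probability.
        return hidden @ T

    print(dict(zip(symbols, forward("a"))))  # {'a': 0.1, 'b': 0.6, 'c': 0.3}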

If you want a second-order Markov model, use two nodes in your input layer and (S1, S2) pairs for your 2nd layer, with a single-symbol set for your third. Definitely not a space-efficient way to do it, but also an exact mapping. You have to get your symbol set from somewhere, which means you're going to grow your ANN as you build your symbol set; if, after training, you're comfortable with the idea, you can cull your absolute weakest relationships from your 2nd-order set. Only cull as space limitations force you to, or you'll lose weak-but-significant relationships.

A 3rd-order Markov model would have three input nodes, (S1, S2, S3) tuples for the 2nd layer, and a single-symbol set for the third. Again, cull as required.

For particularly interesting results, combine the orders: feed I1 into the 1st-order ANN mapping, I1 and I2 into the 2nd-order mapping, and I1, I2 and I3 into the 3rd-order mapping, with all three orders feeding the same output node. (Or higher orders, if you have unbelievable processing and storage capability...)
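
A quick illustration of why the higher-order versions aren't space-efficient: the 2nd-layer tuple set grows as |symbols| ** n, hence the need to cull. (Toy numbers below.)

    from itertools import product

    # 2nd-layer nodes for an n-th order mapping are (S1, ..., Sn) tuples.
    def second_layer(symbols, order):
        return list(product(symbols, repeat=order))

    print(len(second_layer(["a", "b", "c"], 2)))  # 9 pair nodes
    print(len(second_layer(["a", "b", "c"], 3)))  # 27 triple nodes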

--

I'm happy.

--

If every piece of software got as much detail-oriented attention as Toyota's code is getting right now, but *before* disaster struck, there'd be millions of happier developers out there, and software wouldn't suck as much. Development and debugging tools would be absolutely incredible (far beyond anything I've even heard of, including valgrind, lint and VS2010's rather impressive debugging aid feature set) as a result of finding ways to reduce developer time spent in analysis and bugfixing. If software were nearly as difficult to consider "ready for release" as hardware is, I think you'd see something akin to Moore's Law occur in software utility (as a mixed metric of performance and feature set).

People got used to mediocre software. They hate it, but they got used to it. As a result, software pricing models more or less standardized around writing mediocre software, and people get sticker shock when they hit a requirement for non-mediocre software. Put another way: "fast, good, cheap: pick two" -- except that popular priority weighs heavily toward fast and cheap.

--

Flashbacks to Ender's Game when I accidentally try to call CString::GetBugger(). See also, Start->Run->%APPDADA% instead of %APPDATA%. Dargo, resolve his %APPDADA% for him!

--
:wq

Thursday, May 6, 2010

Things on my mind today...

Prime words. If one were to present thousands of people with the words in the dictionary, and ask them to replace each word with a small set of simpler words with near-identical resulting meaning, would a set of fundamental, or "prime", words result? What would they be? Some circular definitions might result, but their presence might help identify fundamental concepts.
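
As a toy sketch of the idea (the mini-dictionary here is invented): treat each entry as "word -> simpler defining words" and keep reducing until nothing changes; whatever a word bottoms out on is its candidate prime set, and circular definitions show up as self-cycles.

    # Invented mini-dictionary: each word maps to simpler defining words.
    definitions = {
        "sprint": ["run", "fast"],
        "run":    ["move", "fast"],
        "fast":   ["fast"],   # defines itself: a candidate prime
        "move":   ["move"],
    }

    def reduce_word(word):
        # Follow definitions until the set of words stops changing; the
        # irreducible remainder is the word's candidate "prime" set.
        seen, frontier = set(), {word}
        while frontier - seen:
            seen |= frontier
            frontier = {d for w in frontier for d in definitions.get(w, [w])}
        return frontier

    print(reduce_word("sprint"))  # {'move', 'fast'}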

What happens if you could influence the Shadow ships' pilot with LSD or PCP? (If you haven't seen through Season 4, don't bother guessing. If you don't know what I'm talking about, then nothing's spoiled.)

What happens if a vampire or lycanthrope bites a bio-tech vehicle?

I need to make a habit of going to work earlier so I can get enough hours in before early afternoon. OTOH, I've got a ton of vacation hours saved up, and that would be a worthwhile way to spend them. :)

I broke someone's nerd-o-meter today. I feel good.

I seem to be discovering something akin to alchemy and herbal remedies, but using fast and processed foods for ingredients. That can't be a good sign.

SanDisk seems to have abandoned the SD+USB product area. [trope]BigNo[/trope].

--
:wq

Monday, May 3, 2010

Dave, my mind is going. I can feel it.

Gotta do some thought-out codin'
but my heart is all a'flutter.
Wrote a line of code
but I can't write out another.
Gotta get my brain in gear
but it's slippin' like it's butter.
If I don't see Firefly soon,
I'll be drinkin' like a Mudder.