Friday, December 25, 2009

Best Christmas Eve Ever

My brother is in town. We watched a bunch of movies together. I introduced him to The Usual Suspects; he then introduced me to Cypher, to which I responded with Paprika.

Coincidentally, that turns out to be an awesome sequence if you like mysteries and plots that twist your brain into pretzels. It's like the hard-liquor cocktail of mystery/thrillers. Watched in that order, those movies' themes flow into each other; The first two have someone unseen pulling the strings, while the latter two make it difficult for the character to discern what's real and what's not. (Paprika takes this to extremes...)

I need to show him Dark City next if I still have the BD from Netflix. I expect doing all four of those in a marathon session would be like a Pan Galactic Gargleblaster. I'd probably throw Dark City somewhere between Cypher and Paprika in the sequence; It shares some of the 'unseen strings' characteristics with Cypher, albeit not quite so dense, and some of the uncertainty about reality, albeit not as dense as that of Paprika.

Tuesday, December 22, 2009

Techtalk Tuesday: Nexus bot

This is an idea I've been chewing on for a while now, and something I've been thinking about actually building. Simply put, it's a chat room bridge, but it's not that simple.

Normally, I sit in rosettacode, haskell, proggit, perl, tcl and any number of other channels, ready to offer insight or assistance, or even just observe, if someone mentions Rosetta Code. What Would Be Nice would be if I could just sit in rosettacode, and let a bot handle it for me.

The general sequence might go as follows:

proggit - * soandso thinks Rosetta Code needs better algorithms
rosettacode - * soandso thinks Rosetta Code needs better algorithms
proggit - soandso: What are you looking for?
rosettacode - <#jubilee> soandso: What are you looking for?
rosettacode - nexusbot: soandso: Did you see this category? (some url)
proggit - soandso: Did you see this category? (some url)

Nexusbot has to perform several complicated behaviors there, so let's look at them.

First:
* soandso thinks Rosetta Code needs better algorithms

"Rosetta Code" matches one of nexusbot's highlight rules for forwarding to rosettacode, so nexusbot relays the message to rosettacode, thinks of it as a "connection", and associates soandso as a primary for that connection, with a most recent related activity timestamp attached to his association with the connection.

Next:
soandso: What are you looking for?

soandso is associated with a current connection (and that association hasn't timed out), and jubilee just said something to him. nexusbot associates jubilee with soandso, and, through soandso, to the relay to rosettacode. jubilee is attached to the relay with his own related activity timestamp, copied from soandso's.

rosettacode - nexusbot: soandso: Did you see this category? (some url)

shortcircuit addresses nexusbot, and indicates he's addressing soandso through nexusbot. Nexusbot sees that soandso is associated with a connection between rosettacode and proggit, associates shortcircuit with that connection (along with a recent activity timestamp), and passes shortcircuit's message along to proggit.

Each time someone triggers a highlight, they're considered a primary for the connection that highlight creates (or would create, if it exists already), and their "recent related activity" timestamp is updated. Each time someone talks to a primary for a connection, they're also associated with the connection, and their "recent related activity" timestamp is set to that of the primary's.

Whenever a primary or secondary talks, their communications are relayed across the connection, but their RRAs are not updated.

When a primary's RRA grows old past a certain point, they're disassociated from the connection. When all of a connection's primaries are gone, the connection is ended.
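To make the bookkeeping concrete, here's a rough Python sketch of the connection tracking described above. All the names, the timeout, and the helpers are mine; a real bot would hang this off its IRC library's message callback, with relay being whatever sends a line to a channel.

import time
from collections import namedtuple

Rule = namedtuple("Rule", "pattern target")   # e.g. Rule("Rosetta Code", "rosettacode")
PRIMARY_TIMEOUT = 15 * 60                     # seconds before a primary's RRA goes stale (arbitrary)

class Connection:
    """A relay between two channels, created when a highlight rule fires."""
    def __init__(self, source, target):
        self.source, self.target = source, target
        self.primaries = {}    # nick -> recent-related-activity (RRA) timestamp
        self.secondaries = {}  # nick -> RRA timestamp, copied from a primary

    def expire(self):
        cutoff = time.time() - PRIMARY_TIMEOUT
        self.primaries = {n: t for n, t in self.primaries.items() if t > cutoff}
        return bool(self.primaries)   # False: no primaries left, so the connection ends

connections = []

def find_or_create(a, b):
    for c in connections:
        if {c.source, c.target} == {a, b}:
            return c
    connections.append(Connection(a, b))
    return connections[-1]

def addressed_nick(text):
    # "soandso: hi there" -> "soandso"; None if the line doesn't address anyone.
    head = text.split(":", 1)[0]
    return head if ":" in text and " " not in head else None

def on_message(channel, nick, text, rules, relay):
    # 1. Highlights: the speaker becomes (or stays) a primary, RRA refreshed, line relayed.
    for rule in rules:
        if rule.pattern in text and channel != rule.target:
            conn = find_or_create(channel, rule.target)
            conn.primaries[nick] = time.time()
            relay(rule.target, nick, text)
            return
    # 2. Everything else: relay lines from primaries/secondaries, and associate
    #    anyone who addresses a primary as a secondary.
    for conn in connections:
        if channel not in (conn.source, conn.target) or not conn.expire():
            continue
        target = addressed_nick(text)
        if target in conn.primaries:
            conn.secondaries[nick] = conn.primaries[target]   # RRA copied, not refreshed
        if nick in conn.primaries or nick in conn.secondaries:
            other = conn.target if channel == conn.source else conn.source
            relay(other, nick, text)   # relayed, but RRA deliberately left alone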


There are a couple of scenarios this logic doesn't quite resolve. What if jubilee is a channel champion, someone who talks to everyone, and whom everyone talks to? It's probable that his side of a conversation with someone else would leak across to the other channel. What if someone talks to a secondary on a related subject, but doesn't trigger a highlight keyword? Well, that line would be lost.

No solution is perfect.

Now to deal with the Big Brother concerns. Ideally, nexusbot would only be in a channel if he were legitimately asked to be there; that means joining only on a /invite, and preferably checking that the user who sent the invite is, in fact, in the destination channel. Likewise, nexusbot would only stay in a channel until asked to leave; that means no autojoin after a /kick.

There's also the consideration that the bot should let someone in authority in the channel know it's there and what it is, and offer a command set to control its behavior in the channel.

Random braindump of possible commands:

HILIGHT LIST/ADD/REMOVE [#channel] -- lists, adds or removes a hilight rule, optionally associated with a channel. Lists include who requested the hilight rule, and when.

RATELIMIT GET/SET -- get or set the maximum number of lines per minute.

LINEBUFFER GET/SET -- get or set the size of the buffer for queuing lines if the ratelimit is hit.

REPLYMODE USER/HIGHLIGHT/CHANNEL/AUTO +/- b/m/v -- treat connections derived from highlights or associated with particular remote channels as channels themselves, and allow some channel modes like +/- m to be applied to them. Likewise, allow user modes like +/- b and v to be associated with remote users. AUTO means having the bot automatically sync its remote user modes (as they apply in that channel) with the channel's mute, voice and bans.

Ideally, only channel members with +O or +o would have access to the setter commands.

Monday, December 21, 2009

Frameworking realtime communications

While looking into flexible ways of writing an IRC bot (the nature of which will probably be in a "Tech Tuesday" post for tomorrow; it's a comm routing bot, so don't complain about IRC != IM quite yet), I tried using IM libraries. I started with libpurple, because I use Pidgin all the time. That was an utter failure, as I couldn't find enough organized documentation to even write a stub quickly.

Then I tried playing with Telepathy. I wasn't able to get it to work in the few minutes I had left that evening, but I did learn a few things.

First, it's awesomely flexible, and there are existing tools that let a GUI user interact with it essentially the same way your program might, which let me immediately dive in and try testing some capabilities.

Second, it's missing a decent IRC backend. The closest was "haze", which I couldn't get to connect to more than one IRC network at a time. Turns out Haze is just a glue module between Telepathy and libpurple, and the one-IRC-connection-at-a-time thing is a libpurple limitation. (I'm glad I didn't waste my time thoroughly studying the libpurple header files; It wouldn't have done what I needed, anyway.) I might be able to use multiple libpurple instances, but I don't know how safe that would be; libpurple uses a common filesystem location, I don't have control over it through Telepathy, and I can't trust it to be multi-instance safe in its access patterns.

Telepathy is interesting because it allows multiple connections, each routed through any of a number of connection modules. One could conceivably create a program that talks to fifteen different networks using one protocol or fifteen.

Darn cool. I wish it had better support for IRC as a client (and maybe as a server...who knows what kinds of options that opens up?). Support for things like identi.ca and Twitter would be pretty cool as well.

Sunday, December 20, 2009

Computer cable rerouting and bundling (and pics)

I rerouted some cabling, replacing a hodgepodge of miscellaneous SATA cables with a bunch of 90-degree-ended cables I'd bought from Digi-Key.

This is the best I can do for now, until I get another SATA controller (so I can use SATA instead of PATA for that bottom drive) and replace the power supply with one that has modular cables.

Cable rerouting



Poking around the inside of my computer

Nighttime blues

In a pinch, I also wound up using some blue electrical tape to bind a coil of some other cable. I wrapped it sticky-side out, then doubled over and wrapped it sticky-side in. Net effect is that it's just rubbery plastic, no sticky adhesive anywhere.

electrical tape for temporary binding

Friday, December 18, 2009

Bass by phase modulation.

I want to take two 50 kHz synced tone sources, invert them, then phase modulate so that the difference between the two represents my actual signal.

I'm really seeking to produce a high-power signal in the 0-50 Hz range, but the phase modulation approach has three advantages. First, the high carrier frequency can be deadened and insulated much more easily than a pure low-frequency signal. Second, the emitting devices don't necessarily need to be as large as a corresponding cone. Third, they don't need to be directly attached, as with bass shakers, making seating and such easier to manage.

The first problem with the phase modulation approach is the carrier signal; You don't want it to be within the audible range of any human or other hearing animal.

The biggest up-front problem, though, is managing the positioning of the interference nodes. Having a high carrier frequency solves part of the problem by reducing the distance between the nodes. The smaller the distance, the lower the likelihood of a positive node being in one ear, and a negative node in the other. (Wouldn't want to scramble your brains, now, would we?)
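For a rough sense of scale (my numbers; just back-of-the-envelope arithmetic), the node spacing at a 50 kHz carrier works out to a few millimeters, comfortably smaller than the distance between anyone's ears:

speed_of_sound = 343.0          # m/s, air at roughly room temperature
carrier = 50_000.0              # Hz
wavelength = speed_of_sound / carrier          # ~6.9 mm
node_spacing_mm = wavelength / 2 * 1000        # adjacent nodes sit half a wavelength apart
print(round(node_spacing_mm, 1))               # ~3.4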

Monday, December 14, 2009

QOTD

"I went back in after I wrote them and added all sorts of weasel words. It sort of saps the punch from the statements, but it's one of those things I've learned that I have to do to avoid the more egregious forms of willful misunderstanding. " -- Raymond Chen

"Willful misunderstanding" ... That's an issue I've been contemplating for a few months, now.

Reboots and Automatic Updates

I couldn't check last week, as I was out of the office sick, but someone had asked if anyone's system had done an uncommanded reboot in response to automatic updates.

I thought I'd go through my workstation's event log, and I've found two interesting log entries so far:

Installation Ready: The following updates are downloaded and ready for installation. This computer is currently scheduled to install these updates on Wednesday, December 09, 2009 at 3:00 AM: - Windows Malicious Software Removal Tool x64 - December 2009 (KB890830)
and

Installation Ready: The following updates are downloaded and ready for installation. This computer is currently scheduled to install these updates on Wednesday, December 09, 2009 at 3:00 AM: - Cumulative Security Update for Internet Explorer 8 for Windows 7 for x64-based Systems (KB976325) - Windows Malicious Software Removal Tool x64 - December 2009 (KB890830)

Now, I have Windows set up to download only. It's supposed to wait for my confirmation before I install. Notice that those two events include the phrase, "This computer is currently scheduled to install these updates on Wednesday, December 09, 2009 at 3:00AM" ... I didn't tell Windows it could do that. Or, at least, that's not what the normal interface indicated it was set up to do. (I don't make a habit of digging through Administration Tools and tweaking things, as I don't know the full impact of most of what's in there.)

I haven't found the setting that automatically schedules updates for next-day installation even when you've told it to download and wait for confirmation. The only solution to uncommanded reboots, as far as I can see, is to tell it to not even download the updates unless instructed. Not as convenient as having it download in the background, but it saves you the hassle of having a machine reboot on its own, leaving you scratching your head as to why.

Update

Ok, something reset my Windows Update preferences. I just checked again, and it was set to "Install Automatically."

Saturday, December 12, 2009

Coax and Engrish

A few quick photos I took this evening.

Coax 1 - Light
Coax 2 - Dark
Coax 3
Some Engrish

Friday, December 11, 2009

Remote control

@coderjoe: You don't keep the remote for the receiver over here any more?

Me: I haven't seen it in a week and a half...

Wednesday, December 9, 2009

A very simple idea for a browser extension

DOM-walking shortcut keys: hjkl. If you've used vi or Google Reader, you probably know where I'm going with this.

H: Switch focus and highlight to parent DOM node.
J: Switch focus and highlight to next sibling DOM node.
K: Switch focus and highlight to previous sibling DOM node.
L: Switch focus and highlight to first child DOM node.

Something like that would make browsing most blogs' and forums' comment sections much more convenient.

It's simple enough I'd probably do it if I knew how to make a Firefox extension. :-|

Hypoallergenic detergent

So we now have one of those front-loading washers that uses a lot less water. The problem is that even after running the "extra wash" and "second rinse" cycles without soap, I still have to put the load through a second soap-free run of the same to get all of the soap out, and we're already using less than the normal amount of soap specified in the directions.

More details on the washer: It supports a normal wash, rinse, spin-dry sequence, same as any other. It also supports an "extra wash" and a "2nd rinse". I usually run loads with "extra wash" and "2nd rinse" enabled, to get more water through to dilute out the soap.

Why is it important to get all the soap out? It gives my grandmother hives. We're using "Hypoallergenic Purex UltraConcentrate h.e", and even with five soap-free rinses (one normal run with extra wash and 2nd rinse enabled, one soap-free run with the same), there's still enough soap remaining to cause allergic reactions.

Does anyone know of something more hypoallergenic than the stuff we're already using?

Vinegar, Venom and salt

Vinegar, venom and salt
I'm counting all your faults
When the time is right
The pen could write
And joy would come to a halt

Vinegar, venom and salt
They're quite the painful thoughts
I see the fool
I see the tool
Words can be quite the assault

Vinegar, venom and salt
They must be kept in a vault
Padlocked, lost key
No-exit, you see
Void the words and their poisonous waltz.

Tuesday, December 8, 2009

Looking at a longer Winter in Michigan

We're looking at having a longer winter here in Michigan.
You see, there are only two seasons here: "Winter" and "Under Construction"
They're talking about losing federal dollars for road maintenance, as Michigan can't put up the matching dollars.
So we're looking at the road maintenance budget being cut to less than half.
The "Under Construction" season would thus be shortened.
Ergo, we're looking at having a longer winter here.

GPG and signed email

I've started using GPG to sign my email. It was easy; Install FireGPG and generate a key.

In order to send signed emails, FireGPG contacts GMail's SMTP server directly. Fair enough, but that got me thinking... What about having an SMTP server that only delivered signed emails where the signature checked out against some public keyring, and the signer wasn't marked as unauthorized due to abusive behavior? You could have an anonymous relay that operated in that fashion.

Add in a "X-Server-GPG-Signature" header in the email, and an email provider using such a technique could garner a decent reputation, and thus get more or less a pass by any anti-spam filters in the next stage of the email relay.

I'm sure the idea isn't new. I suspect, though, that all that's needed are a few seed SMTP servers that operate in this fashion.

Two puffs every four hours and a vaccine

So all the crap that's in my lungs is dead, but there's still a lot down there. A plain old albuterol inhaler is going to increase coughing and accelerate getting all that crap out. Codeine cough syrup apparently keeps me awake, so some numbing agent from the Novocaine family will help me sleep at night instead.

Meanwhile, I also got the H1N1 vaccine. Whether or not I'm just getting over H1N1 (they just don't test for it any more; tests come back positive in Kent County more often than not), the vaccine is a good move toward avoiding carrying something that could infect my grandmother. I specifically asked my doctor whether I could take the vaccine, and, as it turns out, high fever is the disqualifier, and I'm already over that.

Techtalk Tuesday -- Video editing and image compositing

So I've been thinking about video editing, Linux, and how much I hate Cinelerra.

Now, I don't know a lot about the internals and features of existing video editing tools, but I at least know some of the basics. First, you produce a series of images at a rate intended to give the illusion of movement. Let's look at a single point in time, and ignore the animation side of the equation. Let's also focus on visual (as opposed to auditory) factors.

You have source images, you have filters and other transformations you want to apply to them in a particular order, and you want to output them into a combined buffer representing the full visualization of the frame.

Let's break it down a bit further, along the lines of the "I know enough to be dangerous" areas.

Raster image data has several possible variations, aside from what is being depicted. It may have a specific color space, be represented in binary using different color models (RGB vs YUV vs HSL vs HSV), may have additional per-pixel data (like an alpha channel) thrown in, and the subpixel components can have different orderings (RGB vs BGR), sizes (8bpp to 32bpp), and even formats (integer, floats of various sizes and exponent/mantissa arrangements). ICC color profiles fit in there somewhere, too, but I'm not sure where. There's even dpi, though not a lot of folks pay attention to that in still imagery, much less video. Oh, don't forget stride (empty space often left as padding at the end of an image data row, to take advantage of performance improvements related to byte alignment).

Now let's look at how you might arrange image transformations. The simplest way to do it might be to organize the entire operation set as an unbalanced tree, merging from the outermost leaves inward. (Well, that's the simplest way I can visualize it, at least.) Each node would have a number of children equal to the number of its inputs. A simple filter would have one input, so it would have one child. Any more inputs, and you have a compositing node. An alpha merge, binary (XOR/OR/AND) or arithmetic (subtract, add, multiply, etc.) merge would be two-arity, while a mask merge might be three-arity.

Fortunately, all of this is pretty simple to describe in code. You only need one prototype for all of your image operations:

void imageFunc(in ConfigParams, in InputCount, in BUFFER[InputCount], out BUFFER)
{
}

An image source would have an InputCount of 0; It gets its data from some other location, specified by ConfigParams.
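To make that concrete, here's a toy sketch of the tree in Python, with each node wrapping one operation of that shape. The names and the lists-of-lists standing in for BUFFER are mine; a real buffer type would carry the frame metadata discussed here.

class Node:
    """One operation in the compositing tree; children supply its inputs."""
    def __init__(self, op, config=None, children=()):
        self.op = op                  # callable(config, inputs) -> buffer
        self.config = config or {}
        self.children = list(children)

    def render(self):
        # Evaluate leaf-to-root: children first, then this node's operation.
        inputs = [child.render() for child in self.children]
        return self.op(self.config, inputs)

# A source has no children; it pulls pixels from wherever its config points.
def solid_color(config, inputs):
    w, h, value = config["width"], config["height"], config["value"]
    return [[value] * w for _ in range(h)]

# A simple one-input filter.
def invert(config, inputs):
    (img,) = inputs
    return [[255 - px for px in row] for row in img]

# A two-arity arithmetic merge (averaging, in this toy 8-bit range).
def average(config, inputs):
    a, b = inputs
    return [[(pa + pb) // 2 for pa, pb in zip(ra, rb)] for ra, rb in zip(a, b)]

frame = Node(average, children=[
    Node(invert, children=[Node(solid_color, {"width": 4, "height": 2, "value": 40})]),
    Node(solid_color, {"width": 4, "height": 2, "value": 200}),
]).render()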

So assuming you were willing to cast aside performance in the interests of insane levels of flexibility (hey, I love over-engineering stuff like this; be glad I left out the thoughts on scalar filter inputs, vector-to-scalar filters, multiple outputs (useful for deinterlacing), and that's not even fully considering mapping in vector graphics), you probably want to be able to consider all that frame metadata. Make it part of your BUFFER data type.

One needs to match input and output formats as much as possible, and minimize glue-type color model and color space conversions. For varying tradeoffs of performance to accuracy, you could up-convert lower-precision image formats to larger-range, higher-precision ones, assuming downstream filters supported those higher-precision formats. Given modern CPU and GPU SIMD capabilities, that might even be a recommended baseline for stock filters.

Additionally, it *might* be possible to use an optimizing compiler for the operation graph, from rearranging mathematically-equal filters and eliminating discovered redundancy to building filter machine code based on types and op templates. But that's delving into domain-specific language design, and not something I want to think too hard about at 4AM. In any case, it would likely be unwise to expose any but the most advanced users to the full graph, instead allowing the user interface to map more common behaviors to underlying code.

There's also clear opportunity for parallelism, in that the tree graph, being processed leaf-to-root, could have a thread pool, with each thread starting from a different leaf.

That's an image compositor. Just about any image editing thing you could want to do can be done in there. One exception I can think of is stereovision video, though the workaround for that is to lock-mirror the tree and have the final composite be a map-and-join. (If you want to apply different filters to what each eye sees, you're an evil, devious ba**ard. I salute you, and hope to see it.) Another is gain-type filtering, where a result from nearer the tree root could be used deeper in the tree (such as if you wanted to dynamically avoid clipping due to something you were doing, or if you simply wanted to reuse information you lost due to subsequent filtering or compositing steps). Still another is cross-branch feeding; I can think of a few interesting effects you could pull off with something like that. There's also layering and de-layering of per-pixel components.

As a bonus, it's flexible enough that you could get rid of that crap compositor that's been sitting at the core of the GIMP for the past twenty years.

Monday, December 7, 2009

Flu

My grandmother clipped this out of some county newsletter and left it for me:

How do I know if I have the H1N1 flu?

The symptoms of this influenza virus are similar to just about every 'flu' bug out there.  Common effects are a cough, sore throat, headache, fever and chills, severe fatigue with body aches.  Some people are over the fever and chills phase in about 2 to 3 days while others suffer for more than a week.  Complications can occur days into the illness when lower chest congestion progresses to pneumonia.  This is a secondary bacterial infection that comes about because the immune system is so taxed fighting the flu it cannot fend off the bacterial pneumonia.  This disease is so prevalent that hospitals and the health department have stopped testing specifically for H1N1 2009. It is always coming back positive, so if you have the above symptoms you are presumed to have H1N1 influenza.

(Yeah, whoever wrote that article needs to work on their language skills.)

Cough, sore throat, fever and chills, severe fatigue with body aches. All there, at one point or another in the last two weeks. Pneumonia last week.

Doctor's appointment tomorrow, because I hate the idea of missing a third week of work.

Sunday, December 6, 2009

Thinking about backups.


I've got a 3TB RAID5 volume (three 1.5TB disks) that reads between 150-200MB/s, but only writes at 25-50MB/s.

I would like to have full backup capacity of all 3TB of data, but the question becomes "how"?

If we assume that the reason for the slow write speed to the software RAID 5 array stems from parity calculation, then it stands to reason that a RAID 0 array wouldn't suffer the same speed limitation. Additionally, a RAID 0 array of two 1.5TB disks would hit a 3TB volume size, as opposed to requiring a third disk as in RAID 5.

I'm considering having a second, weaker box run software RAID0, and do a nightly rsync from primary box to the backup box. A dedicated 1Gb/s link would facilitate the copy.
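The nightly job itself could be as dumb as a cron-fired rsync; something along these lines, with the host and paths made up:

import subprocess, sys

# Mirror /home on the primary box to the RAID0 backup box over the dedicated link.
# -a preserves permissions/times/ownership, -H keeps hard links intact, and
# --delete drops anything removed on the primary since the last run.
result = subprocess.run(["rsync", "-aH", "--delete", "/home/", "backupbox:/backup/home/"])
sys.exit(result.returncode)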

If a drive in the RAID0 array fails, I replace it, rebuild and re-run the backup. If a drive in the RAID5 array fails, I replace it and rebuild. If the rebuild kills a drive and the RAID5 fails, I've got a backup. Meanwhile, I've got an isolated power supply, reducing the number of single points of failure. I'm using fewer drives in the backup machine, reducing cost. I'm reusing older hardware for the backup machine, reducing cost.

Tricky part is figuring out offsite backups from there, but my data isn't that valuable yet.

IR->Bluetooth->IR

You know what would be nice to have? A near-proximity IR->Bluetooth->IR adapter.

By "near-proximity", I mean having it attached to the original IR transmitting device, have an IR sensor, convert that to a 2KHz 1-bit bitstream, send that via BT to a receiver that converts it back to an IR transmission near whatever device needs to receive it.

Two significant problems remain: Bulk and power. For bulk, one could take advantage of paper-ICs or other film integration. (When I was a kid, I saw thick-film ICs dating back to the 70s. They may have been around longer than that.)

For power, I don't know. Probably the best way to go about it is to leech off the existing remote control's battery pack. I can think of a couple ways one might do that without interfering with the remote's internal expectations of its power source.

Of course, you could just build the thing into a lithium battery pack, rechargeable via USB, and tout it as both a range and life extension of the remote. Lithium power density is such that you might be able to pack the lithium, charging circuitry and bt transmitter in the space of a couple AA batteries. Some mechanical finagling and shoe-horning might be necessary to fit different battery compartment configurations.

Tis the Season to Barter

So my router was dropping 70-80% of packets, making it nightmarish trying to do anything via SSH. I called up a friend and asked him to pick up a cheap router from Best Buy, along with some new CAT6 ends (These are a *lot* nicer than the crap ones that took me 45 minutes apiece to do badly...).



Of course, an order like that comes to about $70, and I don't have that lying around in cash. PayPal was inconvenient for technical reasons, so we came up with a convenient solution... I just bought an equal amount of stuff from his Amazon wishlist, and am having it shipped directly to his registered address.

Friday, December 4, 2009

Migrating from a solid disk to software RAID.

So I've now got three 1.5TB disks in RAID 5, the array is assembled and running on bootup, and is mounted as my /home. Previously, I was using a 1TB drive for /home.

(I'm going to abbreviate this, including only the steps that worked, and initially omitting some of the excruciatingly long retries)

After installing the three disks into the machine, my first step was using Ubuntu's Palimpset Disk Utility to build the software RAID device. That took about seven hours, unattended.

The next step was copying my old /home filesystem to the new RAID array, using dd. That took about nine hours, unattended.

The next step was expanding the filesystem and tuning some parameters. ext3 doesn't automatically grow to fill the block device it sits on; it remembers the filesystem size it was created with. I had to use resize2fs to expand it from the 1TB it had occupied to fill the 3TB volume.

I looked at tune2fs and enabled a few options, including dir_index (I have a few folders with thousands of files in them), sparse_super (That saved a *lot* of disk space) and uninit_bg (Supposed to speed up fsck). I didn't read the man page clearly, and didn't discover until afterwards that by enabling uninit_bg, I'd given tune2fs the go-ahead to convert my filesystem from ext3 to ext4. Oh well...Seems to be working, and there are a few features of ext4 (such as extents) that I expect will come in handy.

The next step was to reboot and ensure that I could mount the array after rebooting; I didn't want some screw-up on my part to lead to all that time being wasted* by failing a RAID volume. After establishing that I could mount it, it came time to modify mdadm.conf and confirm that the array would come up on bootup. After that, all that was left was modifying /etc/fstab to mount the RAID volume at /home, rebooting, and restoring compressed tarballs and such from my overflow drive.

Filesystem            Size  Used Avail Use% Mounted on
/dev/md0 2.8T 1.2T 1.4T 47% /home


I've gone from having 8GB free on /home to having 1.4TB free. Can't complain.

root@dodo:/home/shortcircuit# dd if=/dev/md0 of=/dev/null
10985044+0 records in
10985043+0 records out
5624342016 bytes (5.6 GB) copied, 27.5768 s, 204 MB/s
18288001+0 records in
18288000+0 records out
9363456000 bytes (9.4 GB) copied, 46.1469 s, 203 MB/s
22992060+0 records in
22992059+0 records out
11771934208 bytes (12 GB) copied, 57.7066 s, 204 MB/s


Getting over 200MB/s in raw streaming read. Can't complain about that, either; I only read at about 70MB/s when pulling from a single (mostly) idle disk that's not part of the array.

Of course, it's not as good as I'd get with a hardware RAID card, but it's a heck of a lot better than I'd get otherwise. My comparative write speed sits down at about 25MB/s when dd'ing from a USB drive to the md0 device. I probably should have tried testing the write speed while reading from /dev/zero before putting the filesystem in place, but the bonnie disk benchmark at least gives some non-destructive results:

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
dodo 16G 38025 73 38386 8 25797 5 47096 85 161903 16 353.8 0
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
dodo,16G,38025,73,38386,8,25797,5,47096,85,161903,16,353.8,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++


For ext4 on top of the software RAID5 volume (consisting of three Seagate ST31500541AS), I get 38MB/s sequential output, 161MB/s sequential input, and 353 random seeks per second. Little to no difference between per-character writing and block writing.

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
dodo 16G 50964 96 61461 15 29468 6 49902 87 84502 6 202.1 0
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++


For ext3 on top of a single disk (a Seagate ST3500630AS), I get 51MB/s sequential per-character write, 61MB/s sequential block write, 50MB/s sequential character read, 84MB/s sequential block read, and 202 random seeks per second.

The long and short of it: a single disk kicks my software RAID5 volume's butt for sequential writes, but the software RAID5 blows away the single disk for sequential reads, and gets about a 75% improvement over the single disk's random seek rate.

One thing I find particularly interesting about this is that the three disks in the RAID volume are 5900 RPM spindle speed, while that single disk is 7200 RPM spindle speed. I suppose having three heads is better than one. :)

Tuesday, December 1, 2009

Well, I went RAID.

Three 1.5TB 5900 rpm Seagates in software RAID 5. Write speeds as high as 50MB/s, so I'm not unhappy. In the process of dd'ing my old /home partition to it, and then I'll expand that filesystem to consume the whole 3TB volume.

As I only have 1.5TB of raw disk *not* part of the RAID, I'm going to peek at compressed filesystems for the 1TB disk. Trouble is, it takes 11 hours to copy a 1TB volume *to* the RAID via DD; I don't think my backups can be daily...

In other news, it's interesting watching SMART data on the drives. The "head flying hours" counter is already higher than the "power on hours", 2.2 days vs 1.8. Go figure.

Saturday, November 28, 2009

Seekable tarballs

In the Linux world, the most common archive tool is tar, hands down. You take a bunch of files, you throw them in a tar file, and then you compress the tar file with your compressor of choice. GZIP is the most common, leading to a .tar.gz/.tgz extension. BZIP2 is also common (.tar.bz2), but I've played around with LZMA (.tar.lzma) and rzip (.tar.rz) as well.

I'm only going to talk about gzip/DEFLATE because that's the only general compression algorithm I've considered in this approach.

It occurred to me there's a way to make a tarball (and, indeed, any DEFLATE stream) seekable. It stems from the reason DEFLATE isn't really seekable in the first place; The actual encoding of the data depends on data that it's already seen, so you can't just peek at any one place in the stream and start decoding there without knowing what came earlier.

In the case of DEFLATE, there's a compressor state that keeps changing as the compressor processes more data, in order to improve the compressor's efficiency. That state represents how the compressor will encode the symbols it sees in the near future, and the symbols that it sees will cause it to change its state to be more efficient for the symbols following.

In the process of decompressing a DEFLATE stream, that same state gets reconstructed, referenced and updated as a decode key as the stream continues. One can't normally jump to the middle of a DEFLATE stream because one needs to have that state in the form that it would be by that point in the stream.

The solution is simple; Save off a copy of the compressor/decompressor state wherever you want a seek point. Keep an index containing the original data stream address, the deflate stream address, and the decompressor state one would have at that point.

Put the index in a separate file, after the terminator indicator, or multiplex it using some sort of container format; I don't care. Point is, you have the option of resuming a decompression run that you never really started.
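Python's zlib bindings happen to make the idea easy to demonstrate in memory: Decompress.copy() snapshots exactly the state described above. A real tool would need to serialize that state, 32KB window and all, into the index file; this sketch of mine only shows the shape of it, over a zlib-wrapped stream.

import zlib

def build_index(compressed, spacing=1 << 16, chunk=4096):
    """One pass over the stream, saving a decompressor snapshot every `spacing` output bytes."""
    d = zlib.decompressobj()
    index = [(0, 0, d.copy())]   # (uncompressed offset, compressed offset, saved state)
    out_pos, next_mark = 0, spacing
    for in_pos in range(0, len(compressed), chunk):
        piece = compressed[in_pos:in_pos + chunk]
        out_pos += len(d.decompress(piece))
        if out_pos >= next_mark:
            index.append((out_pos, in_pos + len(piece), d.copy()))
            next_mark = out_pos + spacing
    return index

def read_from(compressed, index, want):
    """Resume from the nearest seek point at or before `want` instead of from offset zero."""
    out_pos, comp_pos, state = max((p for p in index if p[0] <= want), key=lambda p: p[0])
    data = state.copy().decompress(compressed[comp_pos:])
    return data[want - out_pos:]

blob = zlib.compress(("some repetitive tarball-ish data " * 50000).encode())
idx = build_index(blob)
assert read_from(blob, idx, 1_000_000)[:4] == zlib.decompress(blob)[1_000_000:1_000_004]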

Yes, this harms the compression efficiency, in a way. To seek, you need that decompressor state, and that decompressor state will have a nonzero size. No worries; There are all kinds of ways to tune how large the index will be:

  • You could set an index size as a percentage of the resulting deflate size. After building N bytes of deflate stream data, you save off an index point of M bytes, where (M/(N+M)) is your target percentage.

  • If you know enough about your source data stream, you could be selective about where you put the seek points. For example, if you know that the source stream is a tar file, you can put a seek point at the beginning of each file in the tar archive.

  • You don't have to generate the seek index at all until you'd like to start examining the file. Most of the tools I've used that allow me to browse tarballs decompress the entire archive into a temporary file or directory. In such cases, generating the seek index as an initial step saves the disk cost of a temp file or directory, and the seek index can be kept for later reference.


Another interesting side effect of the seek index is that it allows parallel decoding. If the underlying I/O subsystem is up to the task, multiple chunks of the overall DEFLATE stream may be decompressed simultaneously, with each seek index point as a potential starting point.

Friday, November 27, 2009

Thursday, November 26, 2009

Just watched the Pink Panther movie with Steve Martin in it.

Tradition says it can't hold up to the original (more serious comedy) or its Peter Sellers sequels (which were more slapstick), but I couldn't help but laugh at a few things.

"Don't you feel alone?"
"Not since I found the Internet."
Steve Martin does fine as the senseless, slapstick-mode Clouseau, and I think only my young age might let me get away with saying he's potentially on par with Peter Sellers in the role; I really think it's the script and the poorly-done special effects that bring down the movie. It does give an interesting perspective of the type of person this Clouseau might be, though: Focused on details, but sees a completely different set from those around him, and has misplaced priorities in other areas. Also has an intellectual understanding of form and social protocol, but a major disconnect between his intellect and actions.

Of course, the classic big band Pink Panther theme is a pleasure to listen to.

Monday, November 23, 2009

On RAID types, type classes, and reducing risk of catastrophic failure

RAID solutions like RAID 5 and RAID 6 are intended to improve data protection* while sacrificing less capacity than things like RAID 1.

RAID 5 allows you to put N+1 drives into a system, and get N drives' worth of capacity out. Meanwhile, you can lose a drive and still not necessarily lose your data. RAID 6 allows you to put N+2 drives into a system, and get N drives' worth of capacity out. Meanwhile, you can lose two drives out of the system and not necessarily lose your data.

The problem with RAID is that when you're down to N functioning drives, if you lose one more drive, you're in for a massive world of trouble; Good luck getting your data back. If you can even perceive your filesystem, you're going to be missing 1/N of just about each file. And likely not in a contiguous chunk.

So when you lose a drive out of a RAID system, you put one in, fast, and rebuild the array. Rebuilding the array populates the drive with what was in the old drive that went missing.** Once the rebuild is finished, and you're back up to N+1 drives (for RAID 5) or N+2 drives (for RAID 6), then everything should be back to normal; You just survived a drive failure (or two) without losing data.

The problem is that this rebuild process is hell on the drives; It involves a lot of reading of data from the remaining drives, in addition to their existing live load, to rebuild the data to put on the newly re-added drive. It's not unknown to have an additional drive failure during the rebuild period.

Part of the problem is that most of the drives in a fresh RAID setup will be new, which means that after one or two of the original drives have failed, the rest may not be far behind, which drives up the likelihood of a failure during the rebuild.

So what if one were to induce a drive to fail earlier? I don't mean a time-decided or simulated failure, I mean a physical, time-unknown failure. Say, for example, that when setting up a new RAID, you put the drives through a burn-in period where you intentionally induce a high level of wear, such as by performing a nasty mix of random writes and random reads, inducing spindowns and spinups, etc.

Burn-in periods are already used in some places; They help weed out drives that are prone to early failure. However, if you give each of the drives in your array a different length of burn-in time, then you've reduced each drive's likely lifetime by a different amount, ideally by an exponentiated difference. That, in turn, means that if the drive with the longest burn-in period is the first in the array to fail, then the next drive to fail may be less likely to fail during the rebuild. Given enough of a difference in reduction of expected lifetime, one may even be able to procure something of a safety margin.

The sacrifice, of course, is that you're intentionally reducing the lifetime of your component drives, which means you put out more money in equipment replacement, and you rebuild your array more frequently.

The question is, is that additional equipment replacement cost and rebuild frequency sufficiently offset by the reduction in the likelihood of having a drive failure reduce you to less than N working drives?
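I don't have real failure statistics, but the intuition is easy to poke at with a toy Monte Carlo; every distribution and number below is invented purely for illustration:

import random

DRIVES = 5               # say, a 4+1 RAID 5
REBUILD_HOURS = 24       # assumed window where a second failure is catastrophic
TRIALS = 100_000

def second_failure_during_rebuild(burnin):
    # Invented lifetime model: roughly 30,000 hours, give or take, minus burn-in wear.
    lifetimes = sorted(random.gauss(30_000, 6_000) - hours for hours in burnin)
    return lifetimes[1] - lifetimes[0] < REBUILD_HOURS

def risk(burnin):
    return sum(second_failure_during_rebuild(burnin) for _ in range(TRIALS)) / TRIALS

print("identical burn-in:", risk([0] * DRIVES))
print("staggered burn-in:", risk([i * 2_000 for i in range(DRIVES)]))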

Some other thoughts:

RAID 0 is simple striping. You put N drives in, you get N drives' worth of capacity out, you get faster read times, and if you lose a drive, you've essentially lost all your data.

RAID 5 is similar to RAID 0 in that it uses striping, but an entire drive's worth of error-correction data is spread across all your disks so that if you lose a drive, you can retain your data. That means you get N drives worth of capacity for a system with N+1 drives.

RAID 6 is like RAID 5, but it uses a second drive's worth of data for error correction. You get N drives' worth of data for an array with N+2 drives, and you can lose 2 drives and still retain your data.

In all three of these cases, if you drop below N drives, you're pretty much hosed.

A second recap, more terse:
  • RAID 0: N drives, N drives capacity. Any drive loss means failure of the array.

  • RAID 5: N+1 drives, N drives capacity. Losing more than 1 drive means failure of the array.

  • RAID 6: N+2 drives, N drives capacity. Losing more than 2 drives means failure of the array.


Hopefully, you can see the abstraction I'm trying to point out.

Let's call RAID 0, 5 and 6 members of the same class of RAID array types, and note that for any*** array with N+x drives in the array, the array can withstand the loss of x drives before total failure.

In RAID 0, x is 0. In RAID 5, x is 1. In RAID 6, x is 2. It seems obvious that configurations are possible and practicable for the functionality of this class of RAID types where x may be greater than 2.

I would assume there's going to be a sacrifice in throughput performance as x increases, due to the calculation (writes) and verification (reads) of the error data. Just the same, the potential to increase x leads to the potential to increase N while reducing the additional risk that each increment of N brings.

That means an increase in the (acceptably un-)safe volume size with component drives below the maximum available, meaning component drives which aren't going to be the most expensive on the market. Likewise, as the data density of component drives reaches an inevitable**** cap due to the laws of physics, one can select drive types with more weight given to component drive reliability.

* Yes! I know! RAID is not a backup solution. Now that that kneejerk reaction is out of the way...
** The information required to generate that data already exists on the other drives, assuming you haven't dropped below N active drives. Likewise, if one of those other drives were to die, the information on this drive can be used, in concert with the other remaining drives, to regenerate that drive's data.
*** OK, there's still a matter of the array being in a consistent state.
**** I'm not saying we're there yet. I'm not saying we'll be there within the next fifty years. I'm just saying it has to happen eventually.

Dice 1000

I don't know where it came from, but a piece of paper in one of my binders has the rules for a game written on it:

Required:
  • Paper
  • Pencil
  • 5 dice (I'm assuming d6s)


To start, have each player roll one of the dice; the highest roll goes first, continuing clockwise. The first player will roll all five dice. The scoring of the dice is:
  • 1 -- 100 points
  • 5 -- 50 points
  • 3 dice the same number -- number * 100


So if you roll three 2s, you would have 200 points for that combination. If the player stops at that point, they keep the total for that turn. If they roll again, they must roll dice that will add to the score or they lose that turn's score. If a player rolls all five dice and receives a non-scoring roll, they lose all accumulated points for the game. The first player to score 1000 is the winner.
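For my own notes, the scoring above boils down to something like this, treating any dice that aren't a 1, a 5, or part of a triple as worthless:

from collections import Counter

def score(roll):
    """Score one roll of up to five d6s under the rules above."""
    counts = Counter(roll)
    total = 0
    for face, n in counts.items():
        if n >= 3:            # three of a kind: number * 100
            total += face * 100
            n -= 3            # leftover dice of that face score individually
        if face == 1:
            total += 100 * n
        elif face == 5:
            total += 50 * n
    return total

# score([2, 2, 2, 4, 6]) == 200; score([1, 5, 3, 3, 6]) == 150; score([2, 3, 4, 6, 6]) == 0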

Sunday, November 22, 2009

PulseAudio, sound daemons and screensavers

I had to debug a network issue that was caused by* PulseAudio (multicasting 1.5Mb/s of audio was saturating my wifi, which was preventing anyone from being able to get anywhere.), and in the process of learning about the problem to fix it, I learned something about PulseAudio.

It's amazingly over-engineered. Enough that it makes convenient things that wouldn't normally be possible. For example, any app can register itself as a source or sink. Registering as a source is kinda important for a sound daemon that wants to multiplex audio from multiple apps, but as a sink?

However, I keep thinking about media player visualizations and how most of them suck. Likewise, I think about screensavers and how they could be better.

Take a screensaver, have it register as an audio sink, and let the audio have an impact on the screensaver. For example, I'm looking at the "fiberlamp" screensaver. It looks like it uses some sort of a physics engine to have the fibers hanging down realistically, and when the screensaver starts, the thing starts off with a bit of a bounce before the whole thing settles.

You could vibrate the base of the fiberlamp in response to the sound fed out by PulseAudio, causing the fibers to shake and oscillate in response. You could take advantage of the fiber-optic metaphor, and feed a raster image into the base of the fiber bundle that looks like a more traditional visualization, so the fiber tips look like part of a stretched spherical mapping of that base.
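I haven't written it, but the plumbing is small. A proper screensaver would register itself as a sink (or capture a sink's monitor source) through libpulse; as a stand-in for the idea, here's roughly what reading a monitor with parec and turning it into a "shake the lamp this hard" number looks like. The device name is just an example; "pactl list sources short" lists the real ones.

import struct, subprocess

MONITOR = "alsa_output.pci-0000_00_1b.0.analog-stereo.monitor"   # example device name

proc = subprocess.Popen(
    ["parec", "--device=" + MONITOR, "--format=s16le", "--rate=44100", "--channels=1"],
    stdout=subprocess.PIPE)

while True:
    chunk = proc.stdout.read(4410 * 2)             # ~0.1s of mono 16-bit samples
    if len(chunk) < 2:
        break
    samples = struct.unpack("<%dh" % (len(chunk) // 2), chunk)
    peak = max(abs(s) for s in samples) / 32768.0  # 0.0 (silence) to 1.0 (full scale)
    print("#" * int(peak * 60))                    # stand-in for shaking the fiberlamp's base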

There are a lot of possibilities when you can hook into a sound daemon like that.

* "caused by" is rather nebulous...One could just as easily point out that I shouldn't have enabled the multicast. Or I should have had the wifi block it. Yada yada.

A crazy idea

Construction paper is interesting stuff. What if one were to pulp several different colors, and feed those through what amounts to an inkjet?

You'd wind up with a color printout that has a very different tactile feel.

Friday, November 20, 2009

Wednesday, November 18, 2009

Ideas

"If you have an apple and I have an apple and we exchange apples then you and I will still each have one apple. But if you have an idea and I have an idea and we exchange these ideas, then each of us will have two ideas." - George Bernard Shaw



Now there's an idea I can get behind.

The Four Brethren


In the musty air
Above the misty lake
Sat the four brethren
A proud four stakes

Four stakes against water
Four stakes against ice
Four stakes against beasts birds and eyes.

They all knew their purpose
They each knew why
Each one shared a method--
Protection from the sky

So cozy for a mallard
So safe for the unspry
So guarded against malice
And hungry, prying eyes.

Sunday, November 15, 2009

If it ain't broke...

Engineer: ...don't fix it.

Venture capitalist: ...you're not thinking "out of the box" enough.

Salesperson: ...it'll be obsolete within the next six months, and your ROI will be far better with our newest model.

Agile programmer: ...write a test case for the current behavior.

Hacker: ...It's not fast enough.

Tester: ...hit it harder.

Manager: ...what is it?

End-user: ...why do I care?

Saturday, November 14, 2009

Thinking about the file cache

So, under Linux, just about any memory that's not actively used by applications eventually gets used by the file cache. The file cache keeps a copy of data from recently-used files in memory so that you don't need to read them from disk if you need them again.

One great way to visualize this process this is to install htop, add the memory counter, and configure that counter as a bar graph. The green portion of the graph represents memory actively used by applications, the blue portion represents buffers and such in use by the kernel(I'm still a tad unclear on this point; I think some shared memory mechanisms may be represented there), and the yellow portion is your file cache. The file cache will hold data chosen by some heuristic I'm unfamiliar with. I might describe it as "popular" or "recently-used", but it's really up to the kernel devs.

My desktop machine has 8GB of RAM. That's an insane amount, by any conventional reasoning; Even I'll admit that none of the applications I've run have used that much memory, 64-bit aware or no. However, again, any memory that's not being used by an application eventually gets used by the file cache, which means I eventually have about 6-7GB of file data cached in RAM. Believe me, it makes a difference when I'm not cycling through disk images tens of gigabytes long.

What if that file cache could be populated in advance? What if a filesystem could retain a snapshot of which files (or pages or sectors or blocks; However they organize the data in the cache.) were in the file cache at a particular time? I'm not talking about the file data itself, but pointers to that data. When the filesystem is mounted, assuming it's clean, the snapshot could be used for initially populating the filesystem cache.

At a naive level, the snapshot could be made when unmounting a write-enabled filesystem, though not when remounting to read-only. (That's a common failsafe approach for dealing with hardware blips, and it doesn't make sense to try to commit data to a potentially failing device.) When the filesystem is next mounted, the file cache state could be restored, immediately bringing recently-used files into memory. That will increase the mount time, but in a large number of use cases, it will improve the speed of file access. You could even choose to not restore that file cache state without any worries for data integrity.

More sophisticated approaches might allow the triggered switching of profiles. Let's say you use your system for web browsing as well as the occasional game, or even as a file server. You might have a different set of cache data for each purpose. Tie it to individual binaries, or even trigger it based on loading particular files, and be able to flush a large amount of data into the cache in anticipation of the workload historically seen associated with that application. Did gdm just start? Load all the GNOME pixmaps and sound into the file cache. Did Firefox just start? Load the theme data, plugins and that stuff under ~/.mozilla-firefox.
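Nothing stops a userspace approximation of the per-application piece today, minus the filesystem integration: keep a list of the files a given workload touched, and on the trigger, ask the kernel to pull them back into the page cache. A sketch of mine, with a made-up profile format (one path per line):

import os

def warm_cache(profile_path):
    """Read a newline-separated list of files and fadvise them into the page cache."""
    with open(profile_path) as profile:
        for line in profile:
            path = line.strip()
            if not path or not os.path.isfile(path):
                continue
            fd = os.open(path, os.O_RDONLY)
            try:
                # Hint that the whole file will be needed soon; the kernel schedules
                # readahead without us copying any data into userspace.
                os.posix_fadvise(fd, 0, os.fstat(fd).st_size, os.POSIX_FADV_WILLNEED)
            finally:
                os.close(fd)

# e.g. warm_cache("/var/cache/profiles/firefox.files") when Firefox launches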

So long as the filesystem is aware of these cache profiles, it might even be able to take advantage of some of the free space on the disk to keep copies of the cached files in a contiguous place on the block device, to speed up loading of the cached data. If the data was modified, of course, the filesystem would have to rebuild the cache block at an idle time in accordance with system energy usage policy. (I.e. if you're on battery, you might only tack the modified version onto the end of the block. Or you might not rebuild the block at all until after you're wall-powered again.)

Thursday, November 12, 2009

Cables, frequency loss and equalization

So, apparently, different types and qualities of speaker cable will attenuate different frequencies by different amounts. Why not boost the signal at those frequencies being attenuated?



Imagine a receiver that you connect to a "stub" speaker for calibration. The receiver could pass a signal through the stub, measure the relative power loss at different frequencies and different drive levels, and build a model for a dynamic power-switched equalizer. (As a stage between the primary equalizer and the amp)
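The correction itself is just arithmetic once the measurements exist; roughly this, with every number invented:

# Measured loss through the run and stub at one drive level, in dB per band (invented numbers).
measured_loss_db = {60: 0.8, 250: 0.5, 1000: 0.3, 4000: 0.6, 12000: 1.4}

# Boost each band by what the cable eats relative to the flattest band, so the wiring
# stops shaping the frequency response; overall level is left to the volume control.
reference = min(measured_loss_db.values())
correction_db = {band: loss - reference for band, loss in measured_loss_db.items()}
print(correction_db)   # {60: 0.5, 250: 0.2, 1000: 0.0, 4000: 0.3, 12000: 1.1}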



This would let you (mostly; Your frequency bucket size will still have an impact) eliminate your wiring as a source of frequency variation, though you'd still have to deal with general attenuation. Your initial eq is still in place, so you can still choose between your "jazz", "rock", and "movies" modes.

Home theater update: New placements, new wiring.

So I got the receiver and PS3 moved out of view, which is awesome; It reduces the number of non-presentation light sources.

I started with a 100ft spool of 16ga, but I was losing 30db between the receiver and the speakers. I just finished ripping that out and replacing it with 12ga copper.

New spool of speaker cable

wiring above receiver

Top two (copper) are 12ga rope-braided speaker wire. (Dayton SKRL-12-100 12) Pinned as a pair with 3/4" plastic cable staples, also picked up at Home Depot. They replaced some crappy 16ga I'd been feeding the front left and right channels, whose speakers consume up to 100W each. I was losing 30db across the 16ga; It's a 50ft run.

The bottom two are RG-6, using F-type-to-RCA adapters on either end; They're actually carrying headphone-level audio from the receiver to the subwoofer, whose internal amp has a built-in crossover. Once I get my receiver-side crossover set up, one will carry plain cable, while the other will carry LFE.

I did what I could to reduce cable strain.

The remaining crap isn't my wiring; It's network, power and cable.

Wiring leading to receiver and video source.

Top cable is HDMI, leading to the TV, pinned with 11mm round cable staples I picked up at Home Depot.

Next two (copper) are 12ga rope-braided speaker wire. (Dayton SKRL-12-100 12) Pinned as a pair with 3/4" plastic cable staples, also picked up at Home Depot. They replaced some crappy 16ga I'd been feeding the front left and right channels, whose speakers consume up to 100W each. I was losing 30db across the 16ga; It's a 50ft run.

The bottom two are RG-6, using F-type-to-RCA adapters on either end; They're actually carrying headphone-level audio from the receiver to the subwoofer, whose internal amp has a built-in crossover. Once I get my receiver-side crossover set up, one will carry plain cable, while the other will carry LFE.

Wiring leading to TV and speakers

Same cables as in the previous shot, seen from the other end: the HDMI run to the TV on top, the two 12ga Dayton speaker pairs below it, and the two RG-6 subwoofer feeds on the bottom, as described above.

Hanging grayish cable is a plug for the light in the room. I've got it stapled, but I'd *really* like to be able to replace the light and wire it in directly. At least I've got it stapled, so it's not hanging down as far...

View in the dark

Main menu of the Ghost in the Shell disc.

Only annoying light is the power LED on the TV, and I can take care of that with some black electrical tape.

There's also some reflection off the ceiling. Not sure what to do about that yet. May try using a black diffuse covering if/when I put some R13 fiberglass up there.

Tuesday, November 10, 2009

I know that I know nothing

"I know that I know nothing."

That's great. Now if only there were a way to get an enumerated list of everything I don't already know.

Hey, what's this machine you guys are bringing into the office? And what's with the wires sticking out of the fairy cake?

Sunday, November 8, 2009

I'm hungry. Dinner break.

Depart to partake in repast,
he said,
Depart to partake in repast!

In order to function tonight,
he said,
depart to partake in repast!

It's time to consume!
It's time to devour!
Depart to partake in repast!

Saturday, October 31, 2009

Cursing you and insulting your mother

So I started with this:

The only thing that can make a program user-unfriendly is if it's cursing you and insulting your mother. Otherwise, it's merely unintuitive, and that's a factor driven by unfamiliarity with the paradigm.

Too long...I need it down to 140 characters. Try this:

A program isn't user-unfriendly unless it's cursing you and insulting your mother. Otherwise, it's merely unintuitive, and that's a factor driven by unfamiliarity with the paradigm.

Still too long. How about this?

A program isn't user-unfriendly unless it's cursing you and insulting your mother. Otherwise, you're just not familiar with its paradigms.

Still too long. One more try...

A program isn't user-unfriendly unless it's cursing you and insulting your mother. Otherwise, you just don't know how to use it right.

Ok, that's short enough, but it's kinda clunky, and could be more succinct.

Unless it's cursing you and insulting your mother, you simply don't know how to use it.

Not what I was trying to get across...

Friday, October 30, 2009

D&D, pics, inspiration and scheduling philosophy



I happened across Pixdaus while following someone's friendfeed, and I subscribed to its RSS feed.

It's a fast, fast RSS feed, and it's difficult to keep up with. However, I've been trying... A lot of what I've been seeing in it has been giving me genuine inspiration for settings, encounters, props and even campaigns for D&D. Add to that a blog post I recently read, where the DM's roleplaying of the giggling of some minor monsters got her players greatly and emotionally engaged in the combat. Roleplaying monster sounds? Why didn't I think of that? That could give me something about the combat side of things that I could enjoy.

It's sparked my interest in DMing again, and I'm slowly assembling a campaign in my mind. The next step is finding players and a suitable environment; GrandLAN, for all its rich, perpetual presence of players, was normally too noisy or cramped for comfortable play. I'm tempted to hold it in my basement, where I can use my TV and sound system for still imagery and auditory props, but then I've got to worry about who can make it and when.

I still think that a "regularly scheduled" game is a bad approach. You can either count on a schedule, or you can count on the presence of players. Not both. Also, having variable time between games offers more opportunity to prepare and ensure an enjoyable session. I don't have a need to kill time; Like anybody else, I have precious little of that already. I have a desire to enjoy the game.

Drug User's guide to tokenizing a string

std::vector<CString> tokens;
int iToke = 0;  // position cursor; CString::Tokenize sets it to -1 when the string runs out
for (CString tok = list.Tokenize(_T("n"), iToke); iToke != -1;
     tok = list.Tokenize(_T("n"), iToke))
    tokens.push_back(tok);  // skips the empty token Tokenize returns at the end

This sounds like fun.

Copied and copied and copied... here we go: Leave ONE WORD (in the comment section) that you think best describes me. It can be only one word. No more. Then copy and paste this on YOUR page so I may leave my one word about you.

Wednesday, October 28, 2009

Pinging

One of the more frustrating things about ping.fm, and pinging to multiple services (four for blogs, five for <140 char), is that people on one service typically don't see the replies given by people on the other services.

For example...Yeah, I know about water*. I've gotten that suggestion on at least three of the five services, I haven't checked one of the remaining two, and the other of the remaining two doesn't get any comments.

* I've been drinking a lot of it, especially after having quit caffeine. Right now, drinking the amount required to deal with the "hunger" would wash out an awful lot of water soluble vitamins.

Tuesday, October 27, 2009

On the subject of instructors...

I was just reminded of one of my favorite instructors in college. The class was titled "Performance Studies," and she was tough. The first day of class, she went on about how every one of us was going to have to get in front of the class and perform, how if we didn't have the assignment in on time, it was a zero, etc. etc. She was scary, and she meant every word of it.

Next class session, about half the students didn't show up; They dropped. They didn't want to do the work. Those that remained were either calling her bluff, or they were willing to do the work.

She wasn't bluffing. Though her tone softened, her policies didn't. We did the work, or we didn't get the grade. More dropped out, and most of the rest of us started enjoying the class more and more. By the end of class, there were fewer than ten of us left.

I've got some of their email addresses somewhere; I created a mailing list for the HU273 students for that session.

Oh, the instructor's name was Billie-Sue Berends.

Saturday, October 24, 2009

Star Trek

@coderjoe: "What does God need with a starship?!"
@kilocmdrlinn: "I was surprised you didn't get that reference."
@coderjoe: "I got that reference, I thought you didn't get it."
@mikemol: "Oh, come on, you guys couldn't remember the name of that movie for weeks."
@coderjoe: "We remembered it was The Undiscovered Country, we couldn't remember the number."

@kilocmdrlinn, @mikemol: "... ... BWAHAHAHA!"

Friday, October 23, 2009

Unobtainium

@coderjoe: "Wait. The Arcadia was just dragging along the ground there. How does it still have that mast?"
@coderjoe: "Oh, right. It cuts through the ground."
@kilocmdrlinn: "Right. It cut through rock like butter. It's made of not-gonna-break-this"
@mikemol: "Unobtanium"
@kilocmdrlinn: "Not unobtanium, unbreakable. You can't get in anywhere."
@coderjoe, @mikemol: "... ... BWAHAHAHA!"

@kilocmdrlinn: "Mike's got to record this sh*t. These nights are hilarious."


Consider it done. :)

Tuesday, October 20, 2009

Ok, for the last time. (Hopefully)

I run Windows at work. I code Windows apps in native-code C++. It's what I get paid to do. I currently run Linux at home, on pretty much anything with an x86 processor.

I love using Linux. I don't hate Windows. I can't give you specific reasons why any more.

If you're already a Linux desktop user, and have admitted that publicly, then there's a fair chance you hate Windows. That's your business. I really don't care.

You may or may not like using Linux; It's up to you to try it and find out for yourself. If you want, drop me an email and I can probably answer some general questions, and possibly point you to where to find the answers to more specific ones.

You may or may not like Windows. It's up to you to try it and find out. If you don't like it, I don't need to know why.

Yes, Windows has its faults. Microsoft has its faults. Linux and all its distributions have their faults. If you want to expend your energy hating one of them, that's your business. I really don't care, and I'd rather not argue about it; Too many in both communities are fanboys or reactionaries against hype, and I can't argue against emotion.

Which text editor should you use? Whatever you're comfortable with. Which distribution should you use? Whatever you're comfortable with. Which programming language should you use? Whatever you're comfortable with. Which shell should you use? Whatever you're comfortable with. GNOME or KDE? I think you should see the pattern by now.

If you don't have experience with any of them, there's nothing for you to do but try it and see what fits.

Rosetta Code TODO list.

Things I need to do soon for Rosetta Code:

* Go through the MediaWiki extensions currently in use and update them. Remove any extensions that aren't needed.
* Update MW itself.
* Update blog software.
* Build a list of simple maintenance tasks, and suggest a domain-specific language to control a bot that automates them.

Monday, October 19, 2009

Review of Ergo Proxy

I somehow managed to write this without any significant spoilers. Odd.

I finished watching Ergo Proxy this weekend.

Let's get the mundane bits out of the way first. Yes, Ergo Proxy is expensive to buy; It's produced by Geneon, who's always had rather high prices on its US releases. After watching EP, I'm starting to wonder if that's due more to the quality than to trying to charge Japanese rates in a US market.* EP was a good buy for me, and it's probably going to be one of the earlier series I'll pick up on Blu-Ray when it becomes available.**

Artwork? Superb. Sterile environments look sterile. The very clean and safe appearance of Romdo dome public areas is simultaneously stifling and disturbing in its own way. Non-public-eye areas look gritty and dark.

Sound track? One of the absolute best ambient soundtracks I've heard. It supports the mood, but doesn't set it. It's also a massive pleasure to listen to on its own; There's very little in the middle or upper ranges, most of it is heard in the lower ranges, or felt, if you've got a subwoofer. If you listen carefully, you can pick up on the thematic melody common to several of the songs, the melody of the series itself, maybe.

Character development? Yes! The character development of Vincent Law feels staid and stagnant in a few ways, but they manage to explain that in the last two episodes. The character development of Re-l Mayer may seem slow, but that's primarily because there is an active component in the world holding her back, and when that component is removed, she begins to grow. Slowly at first, but then very quickly.

Theme? It has so many themes it's difficult to note all the important ones. I think there are at least three core themes to the show.

The first is the question of the raison d'etre, the setting's very literal interpretation of the meaning of life. Everyone in Romdo already knows their raison d'etre, their "truth", and it's knowing their raison d'etre that gives them a sense of place and duty. After all, if you know the meaning of life, why wouldn't you fulfill that meaning? One of the themes of Ergo Proxy is examining and questioning the importance of that raison d'etre, and even the importance of that questioning. If your mind isn't spinning, hold on.

The second core theme would have to be the question of reality. (mild spoiler) Several attacks against the core characters involve altering their perception, trying to break the characters' logic or emotions and trick them into doing things that would be detrimental to them. Usually, the viewer isn't even informed of this right away; We're left to be as lost, shocked, angered and confused as the main characters. If the character sees through everything and avoids destroying themselves or the party, we find ourselves relieved. We've just been through the same mindbender the character was put through, and in some cases we're left to wonder whether we would have made the same mistakes the character made.

The third major theme is the question of the soul. What grants a soul? What are the consequences of having a soul? How might gaining a soul conflict with one's raison d'etre? In Ergo Proxy, having a soul is considered analogous to having emotion, or to having lost one's innocence. Much of the show revolves around the question of gaining a soul, be it human or AutoReiv, what one does when they lose their innocence, and the choices they make as a result.

Even through all of that heavy thinking, there is a shining suggestion of hope: that it only takes a very few benevolent individuals to save humanity, even if it takes them time to discover what that means.

I loved it. It wasn't "Awesome" in the Michael Bay sense. It wasn't awesome in the Incredible Suspects sense. It wasn't awesome in the Pixar sense. It was awesome in its own way.

* Anime in Japan tends to be incredibly expensive to buy on DVD.
**And it will; As long as someone is a licensed distributor in the US, some series will continue to be released in newer formats. After all, you can still get The African Queen on DVD. BD (or some other format that matches contemporary televisions) will be part of that cycle.

Friday, October 16, 2009

This one's pretty cool...



I've been seeing a bunch of things lately that have been giving me an itch to start DM'ing again, with mood enhancements. Prop photos like these make it seem even more interesting.

Thursday, October 15, 2009

Been a long while since I've done one of these.

Can you fill this out without lying? You've been tagged, so now you need to answer all the questions HONESTLY. Copy this entire message, then go to “notes” under tabs on your profile page, paste these instructions in the body of the note, delete my answers, and type yours. Easy peasy, lemon squeezy!


1. What was the last thing you put in your mouth?
Belgian waffle with blueberry and whipped cream

2. Where was your profile picture taken?
Dining room

3. Can you play Guitar Hero?
No

4. Name someone who made you laugh today?
Haven't yet.

5. How late did you stay up last night and why?
About 12:30. Watching video on my laptop.

6. If you could move somewhere else, would you?
I don't know. My two closest friends are in GRR, and I'd hate to leave them behind.

7. Ever been kissed under fireworks?
No. I'll go one further...It's been about four years since I've kissed...

8. Which of your friends lives closest to you?
Cojo, I think.

9. Do you believe exes can be friends?
Only if they're portable. However, if they manifest certain dependencies, all bets are off.

10. How do you feel about Dr Pepper?
Tastes meh.

11. When was the last time you cried really hard?
First week in September. Same day profile pic was taken.

12. Who took your profile picture?
I did.

13. Who was the last person you took a picture of?
I don't know; See my cosplay set on Flickr. Order by date taken. Figure out who it is.

14. Was yesterday better than today?
It's 9:42AM. The day is still young.

15. Can you live a day without TV?
Easily.

16. Are you upset about anything right now?
As I said, the day is still young...

17. Do you think relationships are ever really worth it?
Absolutely.

18. Are you a bad influence?
It depends on what you think of as bad.

19. Night out or night in?
Usually in.

20. What items could you not go without during the day?
Either a camera (and batteries, card space) or a computer.

21. Who was the last person you visited in the hospital?
My grandmother.

22. What does the last text message in your inbox say?
Something about having paid my cell bill.

23. How do you feel about your life right now?
Like it's a bubble ready to burst.

24. Do you hate anyone?
No.

25. If we were to look in your social networking inbox, what would we find?
I don't know. I don't look at it often myself. I think there are 15 or so "unread" messages.

26. Say you were given a drug test right now, would you pass?
Depends...Is it a chemical-based test, or a communication-based test? Yes to the former. I'll just have fun with the latter...

27. Has anyone ever called you perfect before?
No.

28. What song is stuck in your head?
It's a Beautiful Morning.

29. Someone knocks on your window at 2:00 a.m., who do you want it to be?
I lack the social circle and suitable imagination required to be anything but shocked and disturbed by the concept.

30. Wanna have grandkids before you’re 50?
I'm 26, and I'm starting to think it's unlikely I'll even have kids before I'm 50.

31. Name something you have to do tomorrow?
Move a TV and sound system to an area in the basement. Better-contained sound means more effective volume.

32. Do you think too much or too little?
Yes.

33. Do you smile a lot?
Yes.

Monday, October 12, 2009

Another major component to self-organization completed

So the second major component to my organization system has been completed. The first was having a maintainable common place for keeping and finding papers; That was as simple as putting everything in plastic sleeves in three-ring-binders*. The second was having a maintainable common place for keeping and finding miscellaneous cables, adapters, components and other odds and ends.

I managed to do the latter in a space eighteen inches by thirty-seven. Here are the pics that illustrate how.

The next step is format-shifting all my CDs and DVDs so they're playable from my computer, and then I can put those in boxes and out of the way. After that, it's just a matter of continuing to put the laundry and trash into their correct places.

* No explicit organization there, just keeping them findable and browseable is sufficient, and has worked well over the past few months.

Saturday, October 10, 2009

And he slides to fail plate.

My new motherboard has some odd quirks. Like not being able to boot from SATA optical drives. And not being able to automagically boot from USB flash drives.

Instead, I had to tell the BIOS to emulate a hard disk interface for the USB flash drive, and then I had to remove the in-system SATA disk drive from the boot options, as the BIOS would otherwise bump the SATA disk to earlier in the boot attempt order than the flash drive. (And, apparently, that disk had an old GRUB MBR on it, so the BIOS thought everything was peachy keen...)

So I wound up eventually booting from the flash drive and installing my OS, with boot, root and swap partitions, and then it was time to install grub.

Wait a second...grub requires addressing based on the BIOS device listing order, and the device listings were currently out of whack because of an emulated disk and a temporarily-attached IDE disk.

I looked at grub's device.map, moved things around to how I thought the BIOS would map them after I removed the flash drive and IDE drive, and proceeded.
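For the curious, legacy GRUB's device.map is nothing more than a list of BIOS drive numbers paired with Linux device nodes, so "moving things around" means reordering those pairs. A minimal sketch of what I mean, with illustrative device names rather than my actual layout:

(hd0)   /dev/sda
(hd1)   /dev/sdb

Whichever disk the BIOS will enumerate first once the extra drives are gone gets to be (hd0); guess wrong, and the root (hdX,Y) lines in the menu end up pointing at the wrong disk when you boot for real.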

Ultimately, I rebooted, grub found my kernel (meaning I'd guessed the mappings correctly), and then...kernel panic.

I'd told grub that the root filesystem was on the swap partition.
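In legacy GRUB terms, the fatal bit wasn't the root (hdX,Y) line GRUB uses to find the kernel (that part worked); it was the root= argument handed to the kernel. A sketch of the sort of menu.lst stanza involved, assuming an illustrative layout of sda1 for /boot, sda2 for swap and sda3 for the root filesystem, not necessarily my actual one:

title  Linux
# (hd0,0) is the separate /boot partition; GRUB found the kernel there just fine
root   (hd0,0)
# oops: /dev/sda2 is the swap partition, so the kernel panics when it tries to mount root
kernel /vmlinuz root=/dev/sda2 ro
# what it should have said: kernel /vmlinuz root=/dev/sda3 ro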

Friday, October 9, 2009

A ballad of Rest.

This story begins,
late one night
At a little bar,
under dim light
I was feeling kinda itchy
And I wanted to make a request

So I knocked on the door
Guy looked at me
And I said, hey, it's Clem
Can't you see?
I've come to spend some money.
And he said
Four oh one.

So I stepped back
Grabbed my cell
Called a friend
Sweet little belle
Figurin' I'd treat that as a three-oh-five.

Few minutes later
Coulda' been ten
A "five oh four"
Is what it'da been
That door opened
Guy said "two hundred"

So in I walked
Smokey and dim
Sat down, looked around
And that was when
I saw a great little gal

I flagged her over
Gave her a tip
Asked her home
She bit her lip
Glanced around, and said "four oh two"

I was a bit startled
This wasn't my night
I looked around
Saw her pal
And I steeled myself for a fight

I said "five oh three"
She said "two oh six"
I said "four oh six"
And she threw a fit
That's when her pal made his way down.

He looked at me
Said "four twelve"
I looked at him
Then said, "well
I don't figure you'd take a three-oh-seven"

He grabbed my shirt
Said "five oh one
and you sir
are gonna be less one"
And he threw me across the room.

I landed on my feet
I was two hundred
But this situation
I'd two oh one'd
Was being two oh two'd
By the people in the room.

I looked around
I had three hundred.
First guy to move
I three oh one'd
Three oh two
Shouted "three oh three!"

Then I saw him
A clear three-oh-four.
I was quaking in my boots
Wanted three-oh-five
But tonight he was four oh four.

Things were gonna one-oh-one
But that was when
The lights turned out
And I four-ten'd
Thankful for the three oh seven.

I made my way home
My brain five-oh-three'd
And at my front door
Who did I see?
But that sweet little belle
And she was happy to see me.

She took me upstairs
Treated my wounds
Chastised me for my tastes
I fell in love
With a different three oh two.

I popped the question
Got a two oh two
Then I asked again
Got a two oh six
Again a while later,
I got a two oh one.

So we got married
And we had a kid
The story goes on
But one day I caught him with
A computer. And a half dozen programs.

I asked him, "Son,
What is it
That you want to become?"
And he said
"That's easy, pa!
A web designer!"

Well, I guess life is stranger
Than the stories inspired
'Cause that little boy
With all his desires
Wanted nothing more
than to code all day.

I looked at him
A twinkle in my eye
And I said, "Son,
I reckon this is why
God made the world this way."

Wednesday, October 7, 2009

The pain of "X is cheap"

There's a common assumption when developing on computers. "RAM is cheap", or "Disk is cheap", etc.

Certainly, it's cheaper than it was ten years ago. It's sure as heck cheaper than it was twenty years ago. But that doesn't make it cheap. For the idea I'm trying to plant in your head, there's no such thing as "cheap."

For something to be cheap, it must be affordable. For something to be affordable, there must be resources (i.e. money) to meet its requirements*. Since entrepreneurs have resources ranging from some number in the millions down to zero dollars, that means that regardless of how cheap something is to one entrepreneur, it will be on the edge of affordability for others, and out of reach for still others.

So I suggest that making things more affordable (by reducing their inherent cost, not by subsidizing them) is not a goal that should be shrugged off just because some component of that inherent cost "is cheap" to get.

The other side of the argument might be that "if the entrepreneur can't afford something, then they need to fix their business model." Well, yeah, if their goal is to make money, then they should be continually improving their process of making money. That doesn't mean that the things they depend on should be allowed to lapse and become inefficient; If a tool can be made more efficient, it poses a lower inherent cost. Ideally, an entrepreneur ought to be able to shop around and find a more efficient tool, but that's rarely an option for all of his tools. One tool may already be at the peak of potential efficiency, while another tool might have so many other perceived advantages that its inefficiency may be overlooked. Such as having an operating system with a massive base of available software, a programming language with a massive base of available libraries, or a software publisher that targets a demographic that buys on a whim.

As an anecdotal example, I'll bring up Rosetta Code, which is my own site. Recently, it was subjected to a relatively massive, sustained increase in traffic (several times its normal level), due to one of its pages becoming a hot item on StumbleUpon. The tiny 256MB Slicehost slice I was using simply didn't have the resources, despite my already having set up all practical caching mechanisms. Many of the users were getting HTTP 500 errors due to timeouts between Apache and fcgid. HTTP 500 errors aren't very descriptive, so I changed the configuration over to mod_php. The default configuration of mpm_prefork allowed up to 250 clients to connect, and a process would be spawned for each of those clients. Each one of those processes tends to eat about 20MB of RAM, so with 250 clients being actively serviced at 20MB each on a virtual machine with 256MB of RAM, well, we were about four and a half gigabytes short. And then there's the database, which didn't fit in memory as it was.
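For anyone tuning a similarly tiny VPS, the knob in question is prefork's MaxClients; the trick is to cap it so the worst case still fits in RAM, and let excess connections queue instead of swapping the box to death. A minimal sketch for Apache 2.2's prefork module, with illustrative numbers rather than the exact values I ended up using:

<IfModule mpm_prefork_module>
    StartServers          2
    MinSpareServers       2
    MaxSpareServers       4
    # ~20MB per PHP-enabled child: 8 x 20MB is roughly 160MB of worst-case Apache footprint
    MaxClients            8
    # recycle children periodically so PHP memory creep doesn't accumulate
    MaxRequestsPerChild 500
</IfModule>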

So I moved over to Linode, where I could get a better price per MB of RAM, took pains to configure and tune MySQL and Apache, and now the site runs fast enough that I can't overload it from my home internet connection. Yes, my MySQL configuration needed improving, and that improved performance.
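On the MySQL side, "tuning" mostly meant sizing the buffers to what the slice actually has. A sketch of the kind of my.cnf fragment involved, with illustrative values rather than my real ones (MediaWiki can use both MyISAM and InnoDB tables depending on setup, hence both buffer settings):

[mysqld]
key_buffer_size         = 16M    # MyISAM index cache
innodb_buffer_pool_size = 64M    # keep the hot part of the database in RAM
query_cache_size        = 16M    # MediaWiki re-reads the same pages constantly
max_connections         = 20     # no point accepting far more than Apache will send
table_cache             = 256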

But I want you to think about something...Why did Apache have to spawn a separate process for each client? Well, that's easy; I was using mpm_prefork, where that's the behavior by definition. But why was I using mpm_prefork? Because the PHP packages wouldn't allow me to use mpm_worker**. And why was that? Because the PHP core (or some of its possible extensions) isn't thread safe.*** Granted, coding in a thread-safe fashion is non-trivial for most coders of today's skill and/or experience, but I might go so far as to argue that there is very little about developing programming languages and their engines that is trivial to begin with.

I'm not writing this to criticize PHP specifically; I'm writing this to criticize a common theme behind problems it shares with many other programming languages and other software: It assumes X is cheap. X might be CPU. X might be RAM. X might be disk (though for disk not to be cheap in the context of web service languages, you're working in some fairly niche environments.)

In short, PHP is expensive in ways I hadn't sufficiently planned for, and its expense caused major issues for my site.

I believe what I'm describing is known as part of the "barrier to entry." What tool developers often forget is that while their tool may have all these cool doohickeys, whiz-bangs and context menus, those features nearly invariably come at a cost that makes it more difficult for their potential customers to afford their product, either in the sense of purchase or in the sense of execution. Even if they remember, I seriously doubt they think it's a major problem. "They need to fix their business model" is the resounding response I hear when one company or another industry complains that costs are too high, and that's why prices are as high as they are.

I already agreed that business models should be continually tuned and improved. But what about the people who can't get into an industry because of the high barrier to entry? What about the industries that haven't been invented because the requisite tools are too expensive? The cheaper you make your tools, the more they can be abused to do or invent something new by someone you weren't planning on selling to.

* And, no, I don't think there's any practical way of escaping that short of a shift in perspective so radical I haven't heard it yet.

** mpm_worker gives each client to a separate thread. Granted, the per-thread heap allocation (which would have been most of it) doesn't go down, but code and data common to all of the threads needn't be loaded into multiple processes. (Granted, Linux may map shared libraries into one place in physical memory and put it in each process's address space...But I don't know.)

*** Of course, the obvious followup is "Why were you using PHP?" ... That's because I'm using MediaWiki. And if you want to critique me for using MediaWiki, I'll likely agree with your critiques, and be very happy to have a long discussion about why I continue to use it. (Please, feel free to do so; My primary reason is a lack of suitable alternatives, and any discussion I have with you may spark an alternative into existence.)

Tuesday, October 6, 2009

Ponytail and beard

I've had questions about my ponytail, and questions about my beard. Here's a pic that has both.

Profile

Monday, October 5, 2009

CDs

Victim: "Looks like I need to pick up some CDs"
Me: "Oh, great! How about some Aerosmith, some..."
Victim: "I meant for the company"
Me: "Ok, how about a three-month rotation?"
Victim: "I mean *those*" *points to a spindle of CD-Rs*
Victim: "Man, that was stupid of me"
Me: "What, using an acronym around me?"
Victim: "That too."

Saturday, October 3, 2009

Going to try to quit caffeine

So I've been irritable, angry, frustrated, stressed, bitter, impatient, unhappy and generally unpleasant to be around for the past several months, and it's grown worse in the last couple months. Part of it is workload, part of it is other stressors, but one likely significant factor has been that to cope with things, I've slept less, consumed more caffeine, and socialized even less.

There are a number of things I'm going to do to work on turning things around, but one significant one is going to be changing what I drink. I haven't drunk sugared beverages for years, but I've generally drunk caffeinated ones, and this year switched things into higher gear with diet Mountain Dew instead of my normal diet cola, and consumed a greater volume of such syrup-formed goods in general.

I'm going to try cutting all of that out. I might allow sugar-sweetened drinks once in a while (the Sobe drinks are pretty good; My two favorites are the Green Tea and Lava flavored ones), but not often. And if you think it's hard finding a variety of diet sodas at restaurants of various types, try finding diet sodas without caffeine there.

I'm also looking at other "natural" remedies such as eating better and ensuring consumption of certain vitamins. (No, I won't be going on a vitamin B binge like I did with vitamin C before AWA. I would note, though, that my roommate and carpooler for that trip got knocked out sick for a week and a half while I've been merely stressed out.)

Quitting caffeine is going to be tricky. I already know I have a dependency; I get nasty headaches if I don't have any for a couple days. (And the only other kind of headache I occasionally get is from dehydration.) Started having one again Friday. Took an aspirin and it went away.

By George, I think I've found it!

So, yeah, the problem definitely appears to be my mainboard. Memtest results:

Memtest results 1

...And a more animated version:



(Facebook folk try here: http://www.youtube.com/watch?v=J31ut0Bhv8w)

That's after about three days of running memtest. If it had been a RAM fault, memtest would have counted the error and continued. If it had been a CPU fault, it likely wouldn't have rebooted when I hit escape. (After it rebooted, it launched straight back into memtest, and that's running currently.) And if it had just been the SATA controller portion of the chipset, the fault probably wouldn't have registered at all.

So it's time for a new mainboard. I'm looking for something with two 16-lane PCIe slots, a 2GBx4 arrangement of PC1066, and support for the Phenom 9650. And it ought to be able to stand running with less than a couple hours down-time per month. Everything else is pretty much secondary.

Friday, October 2, 2009

Thursday, October 1, 2009

SATA

Need recommendations for SATA controller cards. I don't plan to use their built-in RAID functionality. I need at least eight ports, but I'd be happy with two four-port cards.

Expected configuration will include two or three hard drives, and five DVD-ROM drives. Alternate solutions including fan-outs for the DVD-ROM drives would be welcome, as well. Trying to spend less than $150 on hardware here; When I want hardware RAID, I'll spend on a decent card.

Wednesday, September 30, 2009

What Obama's doing

Ever hear of the Gordian Knot? Read up on it if you haven't.

The vast majority of politicians who try to solve a problem try to do so in a fashion that breaks as few assumptions about the System as possible. Budget, things that depend on existing infrastructure, political clout and favor, etc. etc. "Broad, sweeping changes" are usually little more than patching a subsystem of government, without doing anything that would disrupt other components. That's because making the changes that are actually campaigned on is either difficult or impossible without burning through political capital, throwing the "budget" out of its anticipated patterns, or disrupting something else that people like just the way it is.

People simply don't see the disconnect between what they vote a candidate in for and what the consequences would be of that candidate actually following through. I'm not saying Obama did or did not follow through on any campaign promises, just that people don't realize they're voting for things whose implementation they'll find difficult or unpleasant to live through.

If you're familiar with the Gordian Knot by the time you're reading this paragraph, you can probably already see where I'm going. So here it is. By throwing budgetary caution to the wind and implementing the policies and programs he wants implemented, Obama is cutting the Gordian knot. Don't bother untying it, just cut it in half and lace things back together where necessary.

I don't know what the fallout is going to be. I don't think anyone knows in totality, but I can at least say things are going to be different. He was voted in on Change, and that's Change for you...

Liberals and Conservatives

Of the three to five places I posted this, it seems someone misunderstood what I was saying in each place. So I'll be more verbose than 140 characters...

"Liberal" denotes a philosophy where law is aggressively formed and applied to address observed problems. "Conservative" denotes a philosophy where new law may be formed and applied, but not aggressively; This philosophy tends to prefer solutions that do not involve the creation of new law.

My assertion was that people who subscribe to the conservative philosophy would subscribe to the liberal philosophy if they didn't already have what they want; I.e. if they weren't comfortable with the status quo as it currently enabled them. I might go so far as to say that both of these philosophies are moderate ones. The non-moderate form of conservatism using this measurement metric might be libertarianism or anarchism; The idea that law needs to be repealed in a general sense until there is little or no government law in effect. The non-moderate form of liberalism is totalitarianism; The idea that the Government knows best*, and so they should regulate everything.

And "independents?" A misnomer in this context. I'm not talking about political parties, I'm talking about political philosophies.

Granted, all generalizations are false**. It's simply something to think about.

Me? I'm not liberal or conservative. I'm not red or blue. I'm not Republican or Democrat. And though I'm not a card-carrying member of any party, I won't call myself an "independent"...Most independents seem to me to be unofficial and unaware party members. I'm jaded enough about the parties in power and the colors on the tickets that I don't think there's any party that fits more than a third of my personal views.

* The reasoning for "government knows best" varies from perspective to perspective. It might be "government knows best because it's a representative democracy, and thus carries the will of the People", or it might be "government knows best because scientists and economists drive the decisions." There are other reasons for non-government people to think "government knows best".
** Including this one.

A folk approach to the BSOD

(My boss thinks I only listen to techno and electronica, for some reason. This started forming in my head while I was thinking about work today.)

Now, I was sittin' there doing my job as a plain-jane userland programmer. I was lookin' to the side for a moment, and I noticed som'n out of the corner of my eye. Som'n...blue. I looked at the screen. Was it the sysinternals screensaver? No...It only affected two out of the four monitors.

It was the
Blue screen of death
It came to bother me
Blue screen of death
Its code had set it free!
Blue screen of death
That color's pretty nice
Blue screen of Death
But it's comin' for your life!

Now ya gotta code real fast
And ya gotta code real hard
But if you don't code it right
It won't get very far!

Hardware and software
It's all the same to me
Drivers and firmware
It's borked as you can see!

It was the Blue screen of Death
It came to bother me
Blue screen of Death
Its code had set it free!
Blue screen of Death
That color's pretty nice
Blue screen of Death
But it's comin' for your life!

Now, y'all might be wonderin' why I'm telling you all this. Y'see, it's for your own good. When you don't mind your dwords and your qwords, when you forget that your calling convention is more than the introduction when the other guy picks up the phone, when you think a pointer is just a piece of advice, that there's when you're gunnin' for some trouble. Yeah, that's right. You're breaking *my* computer. And I don't take too kindly to that.

That there's the
Blue screen of death
It's come to bother me
Blue screen of death
Your code has set it free!
Blue screen of Death
That color's pretty nice
Blue screen of Death
But it's comin' for your life!

Now you gotta code real fast
And ya gotta code real hard
But if you don't code it right
You won't get very far!

Hardware and software
It's all the same to me.
Drivers and firmware
You broke it, don't you see?!