Observations:
* X10 was once used to allow remote control of power outlets. It operated by injecting short bursts of a high-frequency carrier onto your home's AC wiring, timed near the zero crossings of the mains waveform. Take a peek at it with an oscilloscope some time; it's pretty cool.
* UPS devices report themselves well over USB, but that takes up one of a computer's external USB ports.
* Most systems I've seen and built end up with spare USB headers left unused on the motherboard.
Proposal:
Taking these three in context, consider having a UPS self-report its status by doing a controlled modulation of its AC output. Have a computer's power supply be capable of receiving and interpreting these signals. Have that power supply plug into a spare USB header on the motherboard, to provide those communications onward to the system.
Benefits:
* Avoid using an external USB port
* Allow multiple computers powered off of the same UPS to be simultaneously aware of their power source's state.
Drawbacks:
* AC output of UPS can't be a perfect sine wave, much as that would be ideal. (Though most output a stepped approximation, anyway.)
* Requires that the PC's power supply be able to interpret the signals.
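To make the signaling half of this a little more concrete, here's a toy sketch of how a UPS might frame a status report as X10-style carrier bursts keyed to AC zero crossings. The frame layout, the burst timing and the PowerStatus fields are all assumptions made up for illustration, not any real UPS or X10 spec.

# Hypothetical sketch: pack a UPS status report into a fixed frame and key it,
# one bit per AC zero crossing, the way X10 keys its carrier bursts.
# Nothing here is a real protocol; it's just the shape of the idea.
from dataclasses import dataclass

@dataclass
class PowerStatus:
    on_battery: bool
    charge_percent: int       # 0-100
    est_runtime_minutes: int  # 0-255

def to_bits(status):
    """Pack into a 24-bit frame: 8-bit sync, 1 flag bit, 7-bit charge, 8-bit runtime."""
    frame = (0b10101010 << 16) \
            | ((1 if status.on_battery else 0) << 15) \
            | ((status.charge_percent & 0x7F) << 8) \
            | (status.est_runtime_minutes & 0xFF)
    return [(frame >> i) & 1 for i in reversed(range(24))]

def schedule_bursts(bits, mains_hz=60.0):
    """One bit per zero crossing (two crossings per cycle); returns (seconds, bit) pairs."""
    half_cycle = 1.0 / (2 * mains_hz)
    return [(i * half_cycle, b) for i, b in enumerate(bits)]

if __name__ == "__main__":
    for t, b in schedule_bursts(to_bits(PowerStatus(True, 87, 42))):
        print("%6.2f ms: %s" % (t * 1000, "burst" if b else "no burst"))

The power supply end would do the inverse: watch for bursts at the zero crossings, reassemble the frame, and hand the decoded status up that internal USB header.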
--
:wq
Friday, March 26, 2010
Thursday, March 25, 2010
For the last time, no, I Won't be a fan of "Google Fiber for Grand Rapids"
(This is going out to a whole bunch of places, not just Facebook.)
I Won't become a fan of Google Fiber for Grand Rapids, for a couple reasons. Let's start with the big one: I don't want my Internet traffic primarily channeled by a company whose primary revenue source stems from figuring out how to show me the ads I'm most likely to click on. That's a very, very direct incentive for them to do traffic analysis and directly watch the sites I visit so that they can figure out what I'm interested in, and thus what I might buy, and thus what ads to show me.
No, AdBlock doesn't fix things; avoiding being shown ads wouldn't stop the incentive for analysis any more than a boycott on Wendy's for serving meat helps PETA.
I'm sorry, but I'm just not interested. Even if they promise not to do privacy-invasive things, they ultimately can't help it; it makes too much business sense for them to set up a transparent HTTP proxy and build up their Analytics and Adsense statistical bases. *I* would, if I were them. If they don't, then they're stupid; they would be ignoring the opportunity to blend together geographical and demographic awareness into their ad business, and sell localized Adsense ads with better accuracy than the ones you already see. (From Jenison, I get offers to "meet singles in Kalamazoo" ... Kalamazoo is a long ways away from me, but happens to be where the other end of this DSL connection sits.)
So, no, I'm not a fan of Google Fiber.
The second reason I won't become a fan of Google Fiber is the same reason I don't plaster my car with bumper stickers; I don't care to broadcast my taste, appreciation, like or dislike of every single cause, organization or business that flies across my radar.
--
:wq
Saturday, March 20, 2010
[TSoC] A different kind of cell phone service
Anyone notice that cell providers these days tend to not charge roaming fees? Anyone notice that their phone itself might report itself as roaming as often as not, without it making a practical difference? What about pre-paid phones? How many of those aren't tied to a particular network at all?
If a cell service provider can provide phone service that costs the same whether you're on their towers or not, what's to stop a towerless cell provider from providing service using a phone that doesn't care whether it's connected to the service provider via CDMA, GSM or an encrypted SIP connection tunneled across Wifi or WiMAX? Heck, any of the existing SIP/IAX2 trunking providers could potentially expand into that arena, and people wouldn't necessarily require a POTS phone number to call or be called. (Though, rather than calling a Skype ID, someone could simply call me at phone.michael.mol.name.)
Admittedly, toggling between cell and 802.11abgn radios will reduce battery life, and the phone would have to be a tad smarter to manage more of the service provider hand-offs itself, but that price is already being paid by smartphone owners.
Ordinarily, I'd be one of those arguing back "you can take my landline out of my cold, dead hands", but since before I even turned 18, I've somehow managed to never have a landline to my name, nor one that people call when they want to reach me. With the drop in landline usage, and the pervasive increase in mobile phone usage, it strikes me as eminently doable.
--
:wq
Friday, March 19, 2010
Have I fallen so far?
I once commanded legions of GI Joe action figures; they would march to their death at my guidance and whim. Have I fallen so far?
Water once obeyed my guidance, flowing through pipe, hose and tube before sprinkling me amidst glorious sunshine. Have I fallen so far?
The sands themselves once moved aside under my machinations; the formation of rivers, gorges, dams and lakes occurred at my influence. Have I fallen so far?
The frozen wastelands would strip themselves bare, the ice and snow forming men as tall as, or taller than, I was. Have I fallen so far?
The trees would shed their leaves, simply for my joy and benefit. Have I fallen so far?
Yes, I have fallen, but the world has turned upside down. It is now my role to serve those who will come after me. I have risen far.
--
:wq
Thursday, March 18, 2010
[Photography] 16:9
I want to get into photographing at a 16:9 aspect ratio.
When composing for 4:3, the rule of thumb is to divide your image into a 3x3 grid, and keep a distinct element of the picture in each grid component. I suspect that, in a 16:9 aspect ratio, the composition rule might best change to 5x3. I already know how I'd want to use the six extra grid squares in various centered and offset scenarios.
Anyone know of anamorphic lenses for photography? I haven't upgraded from my FujiFilm Finepix yet, so I'm pretty flexible as far as compatibility...
[Linux, libvirt, kvm, snapshots] Piecing together an open-source equivalent of VMWare Workstation
Well, first, I should mention I've been spending a lot of time in KVM lately; my paycheck depends on being able to code on Windows, and I don't have any hardware I want to put Windows on the bare metal of. libvirt+KVM works nicely as a virtualization setup, and some parts of its feature set rival that of VMWare Workstation right out of the box. For example, the latest version of Workstation I have access to doesn't support direct access to PCI devices. KVM does (though I haven't played with it). Of course, there are things that Workstation does better than libvirt+kvm, as well. Let's see what's missing:
* Snapshots, full clones, linked clones
* Resize guest display to match viewer viewport. (I had to resort to some XML editing in order to get a video device I could make larger than 1024x768, and that's still not as nice as having the guest framebuffer resize automagically)
* Drag-and-drop between guest and host.
* Application-in-host (Workstation calls this "Fusion", and it's awesome if you need it. I still need to send that fruit basket to the couple developers who wrote that feature.)
* Suspend guest VM.
* Has D3D9 acceleration (for Windows guests on Windows hosts, which wouldn't work for me anyway), which is better than the no-graphics-hardware-acceleration I face with libvirt+kvm. (There is an SDL client, but libvirt's dynamic privilege toying with PolicyKit doesn't manage to get it to work on my system.)
Things that libvirt+KVM does better than Workstation (at least as of the latest version I have access to)
* Will give as many as 16 virtual processors to a guest.
* The VM definition file is XML, and well-documented on their website; manual tweaks are pretty easy.
* Has the ability to not affix a MAC address to a guest. While this causes Windows guests to destroy old NICs and create new ones every time you reboot, it will make cloning much easier.
So let's see what we can do about the things that libvirt+kvm doesn't quite do on its own yet.
D3D9 Acceleration
Not going to happen; host doesn't have Windows. On the other hand, I can put in another video card and give a guest VM direct access to it. I haven't tried it, but it could work, so long as I don't allow the host's X server to use that card.
Suspend guest VM
Not going to happen until there's a way to specify a file to hold a RAM image, at least. (I could easily see them mmapping that file, too; it seems like a perfectly kosher way to leverage the x86 MMU on 64-bit systems.) No such option exists, currently.
Application-in-host
On Windows and Linux guests both, this requires some awareness of and integration with the guest window manager. There are probably VNC servers that can handle that individually, and there are already VNC servers for both guest operating systems that will serve up individual windows. If you put a VNC client on the host in Listen mode, you could find a way to have every launched application connect to that client. It'd require some real script-fu in the guest, though, and there's also the problem that VNC doesn't support drag-and-drop between clients. (And typically isn't aware of it at all...)
Drag-and-drop between guest and host
That'd require some extension to the VNC protocol, so I can't do that right now. (OTOH, it would be a very, very good general enhancement to the protocol!)
Resize guest display to match viewer viewport
VMWare uses the same core protocol between its viewing interface and its virtualized video card, and its viewing interface supports the viewport and guest triggering each other to resize. (Admittedly, I don't know for certain that the viewport-triggers-guest event is communicated using the same channel.) It seems likely that it could be done, but will require an extension to the VNC protocol.
Snapshots, full clones, linked clones
Some of this is something I might actually be able to do soonish (and I'll certainly be working on it!). Linux has LVM, which, at the very least, will let me take a snapshot of a base volume, and do copy-on-write for maintaining things. I don't know that it will let me take a snapshot of a snapshot, though, or give me simultaneous r/w access to multiple snapshots of the same base volume. If it does, that would put it pretty close to VMWare Workstation's capability. If not, it puts me closer to VMWare Server's. VMWare Workstation allows you to take a snapshot while running on a snapshot, which gives you base->snapshot->snapshot, and you can choose to run any of those you like without affecting the others. It also allows you to create a linked clone of any of them, which amounts to creating a snapshot that you can run at the same time as you run a snapshot from that tree. VMWare Server, on the other hand, only has two states for a VM: current snapshot, and current state. In VMWare Server, you can always replace your current snapshot with your current state ("Take snapshot") or your current state with your current snapshot ("Revert to snapshot"), but that's it.
Notice I'm not talking about VMWare Server ESX*; that beast's snapshot model is a little closer to Workstation's.
Out of all of these things, the snapshots and clones are the part I need to work on the most. Right now, snapshots and clones are pretty much the same thing; copy the disk image, copy the XML file, and modify the XML file to point to the copied disk image's location. That gets pretty expensive, diskwise, pretty fast. LVM's copy-on-write should help with that a lot. So would a filesystem with built-in deduplication, but ext4 (what I'm running on right now) doesn't have that. (I can think of a bunch of other reasons a filesystem with built-in deduplication would be awesome, too. Maybe I'll brainstorm on some different performance tradeoff option ideas in the near future.)
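As a starting point on the snapshot/clone front, here's a rough sketch of a poor man's linked clone: take an LVM copy-on-write snapshot of the guest's disk, copy the domain XML, and point the copy at the snapshot. It assumes the guest's disk is a logical volume and that lvcreate and virsh are on the PATH; the domain name, volume names and snapshot size below are made up.

# Rough sketch, not a polished tool: LVM snapshot + tweaked libvirt domain XML.
import subprocess
import tempfile
import xml.etree.ElementTree as ET

def linked_clone(domain, base_lv, clone_name, cow_size="10G"):
    snap_dev = base_lv.rsplit("/", 1)[0] + "/" + clone_name

    # 1. Copy-on-write snapshot; the base LV stays untouched, writes land in the snapshot.
    subprocess.check_call(["lvcreate", "--snapshot", "--name", clone_name,
                           "--size", cow_size, base_lv])

    # 2. Copy the domain definition, point the disk at the snapshot, and drop the
    #    UUID and MAC so libvirt generates fresh ones.
    root = ET.fromstring(subprocess.check_output(["virsh", "dumpxml", domain]))
    root.find("name").text = clone_name
    uuid = root.find("uuid")
    if uuid is not None:
        root.remove(uuid)
    for iface in root.findall(".//interface"):
        mac = iface.find("mac")
        if mac is not None:
            iface.remove(mac)
    for source in root.findall(".//disk/source"):
        if source.get("dev") == base_lv:
            source.set("dev", snap_dev)

    # 3. Register the clone as a new domain.
    with tempfile.NamedTemporaryFile(suffix=".xml") as f:
        f.write(ET.tostring(root))
        f.flush()
        subprocess.check_call(["virsh", "define", f.name])

if __name__ == "__main__":
    linked_clone("winguest", "/dev/vg0/winguest", "winguest-clone")

Whether LVM will then let me snapshot the snapshot, or run several snapshots of the same base at once, is exactly the open question above.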
--
:wq
Tuesday, March 16, 2010
[Techtalk Tuesday] Bittorrent and swarm stability
I don't know if I've blogged on this before, but it's been on my mind a while. I've noticed that Bittorrent swarms tend to be unstable.
Bittorrent chops up the data into multiple pieces, and then the various clients in the swarm play mix and match until they have all the pieces they need for a full copy. Then (if they're well behaved) they stick around for a bit longer and give more copies of the pieces to other members of the swarm.
When a client joins the swarm, it gets a mapping of all the pieces in the swarm, and a listing of who has copies of which pieces. It then asks whoever it can for the pieces it still needs to complete its copy.
The problem I've observed is that if one piece has copies in more clients than another piece, then that first piece will tend to be copied to more clients still, simply by being more available, while that rarer piece becomes more relatively rare. This causes data distribution in swarms to grow "clumpy", with the availability gap between the most common and the least common piece growing wider and wider. If the only copies of a piece exist on swarm members which already have full copies, and those members drop out, then the swarm can't form more complete copies until someone with a full copy rejoins the swarm. Meanwhile, the rest of the swarm members copy from each other until they're all not-quite-complete, but don't get any farther than that.
What I'd like to suggest are three changes to the Bittorrent swarm behavior, spanning both the "server" and "client" sides of the connection.
The change to the client is pretty simple; look at the map of available pieces and nodes, and try to grab the rarest ones first. Even if the client is misbehaved and doesn't plan on contributing to the health of the swarm, it still makes sense to grab the rarest pieces before they may disappear.
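Here's a minimal sketch of that rarest-first pick, assuming the client has a bitfield from each peer it knows about; the names and data structures are illustrative, not any particular client's internals.

# Rarest-first piece selection: count how many peers advertise each piece,
# then pick, at random, one of the least-available pieces we still need.
import random
from collections import Counter

def pick_next_piece(have, peer_bitfields):
    availability = Counter()
    for pieces in peer_bitfields.values():
        availability.update(pieces)

    wanted = [p for p in availability if p not in have]
    if not wanted:
        return None

    rarest = min(availability[p] for p in wanted)
    # Break ties randomly so the whole swarm doesn't pile onto one piece.
    return random.choice([p for p in wanted if availability[p] == rarest])

if __name__ == "__main__":
    have = {0, 3}
    peers = {"a": {0, 1, 2, 3}, "b": {0, 1, 3}, "c": {1, 3, 4}}
    print(pick_next_piece(have, peers))  # prints 2 or 4; each is held by only one peer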
The first change to the server is a bit trickier, and would require support from the client. If the client asks for piece A, the server should give it piece A--after it gives it piece B, which is currently rare in the swarm, and it knows the client doesn't already have it.
Aside from requiring the client to recognize that the data it was handed wasn't initially the data it asked for, I can see one other problem with the latter approach. If the client is already asking for piece B from another node, then getting a copy of piece B from the node from which it asked for piece A is a waste of time and bandwidth. It could inform the server which pieces it's grabbing, but that would also be a loss of efficiency. A compromise might be met where the swarm would only support slipping in pieces from the rarest N% set of pieces, and so the client would only be inclined to report any from that set that it was already grabbing.
Having the client's pull pattern fall within, overlap with, fall near, or stay distant from that N% segment each has its advantages and disadvantages, and I'm not sure which would outweigh the others. On one hand, avoiding the N% bracket is a destabilizing influence on the swarm, but saves incident overhead. On the other hand, favoring the N% bracket means the client grabs the rarest pieces quickly, but increases the incident overhead. Favoring just outside the N% bracket means avoiding the incident overhead, but destabilizes the swarm, and makes the assumption that there are a number of "slipping" seeders.
The second change to the seeder side would be to *proactively* push data to other clients, based on observed connectivity--to some extent, each client is aware of the bandwidth and uptime of other clients. If a client is observed to have a lot of bandwidth available to it, or is observed to be a stable member at a lower level of bandwidth, it makes sense for the swarm to push rarer pieces to places where they can be copied from more quickly. (For the stable, low-bandwidth clients, this has the added benefit of reducing their particular risk of missing out on rare pieces even more.)
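For the "slip-in" idea a few paragraphs up, the seeder side might look something like this sketch: when a peer asks for piece A, queue at most one piece from the rarest N% that the peer lacks and isn't already fetching, then send A. The function shape and the 10% default are assumptions, not a formal protocol proposal.

# Seeder-side slip-in: prepend one rare piece the requesting peer still needs.
from collections import Counter

def queue_for_request(requested, peer_has, peer_fetching, my_pieces,
                      swarm_bitfields, rare_fraction=0.10):
    availability = Counter()
    for pieces in swarm_bitfields.values():
        availability.update(pieces)

    # All known pieces, rarest first; keep only the rarest N%.
    ranked = sorted(availability, key=lambda p: availability[p])
    cutoff = max(1, int(len(ranked) * rare_fraction))
    rare = [p for p in ranked[:cutoff]
            if p in my_pieces and p != requested
            and p not in peer_has and p not in peer_fetching]

    queue = rare[:1]           # slip at most one rare piece in front
    queue.append(requested)    # then what was actually asked for
    return queue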
Just some thoughts...
--
:wq
Monday, March 15, 2010
Memetic Mutation
We're no strangers to memes
You know the rules, and so do I
A full mutation's what I'm thinking of
You wouldn't get this from any other guy
I just gotta tell you what I'm thinking
Gotta make you lose the game.
Unqualified domains
So I've got a home network and my own bind install. Let's say my local domain is x, so dodo might be dodo.x, alice might be alice.x, and whiterabbit would be whiterabbit.x. If I ping alice or alice.x, they both resolve to the same IP address.
Now let's throw another domain in the mix, accessible over VPN. This domain is also unqualified, and let's call it y, and say it has hosts alpha, beta and gamma, for alpha.y, beta.y and gamma.y.
Now, in Windows, I can tell the system to try resolving unqualified domains with a domain of x first, followed by y. That way, I can ping alpha, and it resolves to alpha.y, and I can ping whiterabbit, and it resolves to whiterabbit.x.
Under Linux, when I try to ping gamma, it doesn't resolve, but when I try to ping gamma.y, it does. Under a Windows VM routed through the same machine, pinging gamma resolves to the same thing as gamma.y. How do I get the Linux host to exhibit the same behavior? (No, I'm not going to route the host's DNS through the Windows guest.)
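The obvious thing to try is the glibc resolver's search directive in /etc/resolv.conf, with something like the two lines below (the nameserver address is made up, and dhclient or NetworkManager may quietly rewrite the file unless told not to):

nameserver 192.168.1.1
search x y

In theory that makes a bare gamma get tried as gamma.x and then gamma.y, and the ndots option (default 1) controls when the search list applies.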
--
:wq
Sunday, March 14, 2010
[StepMania] I've still got it. [blogging] LJ and ping.fm
So I fired up StepMania again on Saturday. It'd been over a year since I'd played. Life's been busy, opportunity has been slim, and the pad needed tuning.
I dropped into stepmania a couple weeks ago and got the compile issue resolved; they gave me a revision to update to that worked great. Saturday, I tuned the pad.
Fired up StepMania, and all was good.
I gave a run through a few DDR 2nd mix and DDR 3rd mix songs, Franka Potente's "Believe", and a couple variants of Sonic the Hedgehog songs. (Most of those songs make pretty good hard techno. I'd venture a guess it's because of the hardware they were originally written to play from.)
I've discovered that the only thing that really stops me from getting a good score on a five-foot song is how many songs I played leading up to it; my read speed and agility are good, but I need to work on my endurance so I can get more songs out.
I also need to work on breathing while I dance; three breaths for a 120s song is probably not a healthy rate. This is supposed to be aerobic exercise, not anaerobic.
In other news, I've finally got posting to LJ working decently. I write to my post-by-email address, copy the message body into a text editor, replace \n (or \r\n, if I'm on a Windows box) with <br />, and paste that into ping.fm. Then I send the original message.
Conveniently, this gives me a place to put draft posts, even though I'm using ping.fm. Very nice.
(And while LJ strips my email signature, I leave it in for the ping.fm posts. I like it. That's why it's my signature.)
--
:wq
Saturday, March 13, 2010
Applying StumbleUpon to code.
While I don't know any of the specifics behind StumbleUpon's recommendation and rating systems, I've observed a large number of up-trends and down-trends of StumbleUpon referral traffic while watching Rosetta Code's analytics. Based on my observations of how traffic flow works, I suspect the same system could be adapted to random-walking code bases.
Why would you want to random-walk a code base? I can think of a couple reasons. First, you're bored, stuck or stalled, and you need a distraction from the function you're working on. Second, perhaps you need to familiarize yourself with the code base a bit better by randomly jumping around to different files, classes and functions.
Here are a few ways to identify a place in code, thinking strictly from a C++ perspective. (Other languages have their own ideal ways of thinking about code at a component level):
* File::Line
* Class
* Function
At a slightly broader perspective (from a Visual Studio perspective. Other development environments have their own terminology):
* Project
* Solution
Getting meta:
* Bug report
* Bug comment
Of course, a line in a file probably exists in association with a class, and with a function. So you would likely have an implicit relationship there. A class or function is likely to have a relationship with a project and solution (or several, if the file it exists in is shared among multiple projects). So there are more implicit relationships. A bug report may have some association with one of the other location identifiers. More implicit relationships.
So that's the content side of things. What about the user side of things?
StumbleUpon is Yet Another Social Network. You have contacts whose likes and dislikes affect where their recommendation algorithm sends you, and your likes and dislikes affect their browsing experience.
While the "social networking is going to solve all your business problems" fad went away with the dot-com bust, there's still some value in positional association. Let's say you've got a project development team of four people, and they all tend to work on the same product or product component. Or maybe you've got a company with thirty developers, and they're all working on the same gigantic codebase. That's akin to StumbleUpon's contacts.
While they're random-walking through their codebase, the developers' individual interests and disinterests would draw each other's attention to various parts of the code base, not as something to direct their focus on, but as something to let their brains passively absorb and toy with while they're taking a step back from the draining problem they're already focused on.
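As a toy sketch of what the walk itself could look like, assume a flat list of code locations and a table of teammates' thumbs-up/thumbs-down votes (all the names and data below are made up):

# Weighted random walk over code locations: liked spots get visited more often,
# disliked spots less often, and everything else stays reachable.
import random

def stumble(locations, votes, team, current=None):
    def weight(loc):
        w = 1.0
        for person in team:
            w *= {1: 2.0, -1: 0.5}.get(votes.get((person, loc), 0), 1.0)
        return w

    candidates = [loc for loc in locations if loc != current]
    return random.choices(candidates,
                          weights=[weight(loc) for loc in candidates], k=1)[0]

if __name__ == "__main__":
    locations = ["parser.cpp:120", "Widget::paint", "bug #4512", "net/socket.cpp:88"]
    votes = {("alice", "Widget::paint"): 1, ("bob", "bug #4512"): -1}
    print(stumble(locations, votes, team=["alice", "bob"]))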
Anyway, just a thought.
--
:wq
Friday, March 12, 2010
Hardware RNGs
Generation of good random data is hard. Even operating systems have a tricky time doing it. Tools like puttygen implore you to move your mouse around a lot while it's generating a key, in order to add entropy to the system. Generating a gpg key has similar problems, except that the tool discards data given to it that's not random enough, and this drains the kernel of entropy used to feed the internal PRNG. (Read the man page; it warns you about it.) If you're on Linux, and you want to see this in action, do something like this:
"cat /dev/urandom"
You'll get a flood of garbage data, then a trickle, then it will stop. You've drained the kernel's entropy pool. Move your mouse around (you're not doing this on some server, right?), and you'll see the trickle resume.
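If you'd rather watch the pool itself than the output, the kernel publishes its estimate at /proc/sys/kernel/random/entropy_avail; a quick Linux-only sketch:

# Print the kernel's entropy estimate once a second.
import time

def watch_entropy(seconds=30):
    for _ in range(seconds):
        with open("/proc/sys/kernel/random/entropy_avail") as f:
            print("entropy_avail:", f.read().strip())
        time.sleep(1)

if __name__ == "__main__":
    watch_entropy()

Run the cat in another terminal and the number falls toward zero; wiggle the mouse and it climbs back up.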
So you can get good random numbers by reverse-biasing a diode and listening for thermal noise, but you only get those numbers at a low rate.
That has to be the most trivially parallelizable hardware problem I've read of in ages. How many millions of transistors can we fit in a 1mm² die?
"cat /dev/urandom"
You'll get a flood of garbage data, then a trickle, then it will stop. You've drained the kernel's entropy pool. Move your mouse around (you're not doing this on some server, right?), and you'll see the trickle resume.
So you can get good random numbers by reverse-biasing a diode and listening for thermal noise, but you only get those numbers at a low rate.
That has to be the most trivially parallelizeable hardware problem I've read of in ages. How many millions of transistors can we fit in a 1mm
Thursday, March 11, 2010
So the new campaign world...
I'm building a new campaign world. I don't know that I'll be able to fully describe this without pics, but here goes:
(Note, this isn't a final description, even of the parts I'm describing. Things are subject to change.)
Premise: The material plane is formed by the intersection of several other planes. The intersection is caused by a loop of (something) that acts somewhat like a gravitational field in a multidimensional space, drawing all the planes together, with the strongest point of intersection following the ring. (Not the interior of the ring, but the ring edge itself.) The ring itself is deep underground. (How deep? I dunno. Deep enough for dwarves to get at it, anyway.)
Whenever I talk about the ring, I'm talking about the actual torus itself, not the entire region it encompasses. Also, N, for now, is 20. I may scale it up or down. I don't know. "I" needs to be calculated based on the concept of the inverse cube law mentioned below, but I don't feel like doing the calcs right now.
There are six primary zones, relating to the ring:
* A2 -- Region 0-I miles from the ring.
* A1 -- Region I-2I miles from the ring.
* B -- Region 2I-(N-2I) miles from the ring.
* C1 -- Region (N-2I)-(N-I) miles from the ring.
* C2 -- Region (N-I)-N miles from the ring.
* D -- Region greater than N miles from the ring. There are two of these, one on the outer side of the ring, and one circular region on the inside of the ring, a sort of interior dead zone.
Notable artifacts of this:
* Magic is strengthened in proximity to the ring, though the strengthening effect fades according to an inverse-cubed law. (See "inverse square law", and bump it up a dimension)
Region D: magic has faded to uselessness, and the material plane has faded to void. Not space, not vacuum, just void. No rules of physics apply there. Some who've touched it are said to have ascended, while most are never heard from again.
Region C2: the material plane takes on a strong form of the chaos property. (It's been a while since I read the old 3.0 D&D splatbooks, but I remember there being a chaos plane or some such.) These are known as the "outer strangelands."
Region C1: the material plane takes on a weak form of the chaos property. These are known as the "inner strangelands."
Region A2: the material plane takes on the strong form of the "wild magic" property.
Region A1: the material plane takes on the weak form of the "wild magic" property.
Region B: the material plane behaves normally.
* Climate varies greatly near the ring
Region A1: Small, adjacent regions have very, very different climates. Storms are common as these incompatible climates mix at their boundaries.
Region A2: As with A1, but the climate regions are much larger.
Other notable things about the ring:
There are two tall mountains on the "south east" corner. One of them I've nicknamed "Nattie" for the time being. Deep in the core of Nattie is an area where the hypermagic ring is twisted into a Gordian knot, leading to stronger localized effects. Closer to the ring's center is another mountain, which I've nicknamed "Everest." At an altitude fairly close to Nattie's peak, the "sun" (some bright light source; I haven't figured out exactly how hot it is) follows the hypermagic ring as something like a track. Day and night in the world are simply the sweeping of Everest's shadow, as one large sundial. The "dead zone" region at the center of the ring also casts a shadow; the lack of remotely consistent physics prevents light from traversing it. As a result, most of the world has two periods of darkness for every cycle of the sun.
There is one small anomaly at the "north west" corner. Dwarves dug deep under the surface of the world, and managed to intentionally create a loop twist of their own in one spot on the hypermagic ring. The hypermagic ring would very much prefer to remain circular, and so it will untwist itself eventually, given the chance. When that happens, all sorts of things will happen. That's not a key specified plot point, by the way; it's just a fact of the world. You make your own plots; I'm just defining the world at a static point.
There is one ocean covering the "south west" quadrant of the world, lined with a short region of hills followed by a mountain range. (The same mountain range that contains Nattie, Everest and the Dwarven magic project.) The ocean extends from the outer C2 through the outer C1, B, A1 and A2, across the ring to the inner A2 and A1, and into the inner B regions.
The oceanic sections of the A2 and A1 regions are considered perilous, and almost nobody crosses them except in times of emergency. Think of it like the dead region of wind flow near the equator as feared by wind-powered sailors hundreds of years ago, except that instead of a lack of motive energy, magic just goes wonky there.
Instead, trade between the south and west edges of the ocean follows two primary routes: one that sticks to the outer B region, and one that sticks to the inner B region. The Inner Sea is relatively crowded, with privateers and pirates, but also a large number of coastal trade points. The Outer Sea is fairly empty, but there's no room for error; if you have problems, or if you run low on supplies, you're either going to have to cross the A1/A2 regions to get to the Inner Sea where you might find help, or you're going to be lucky enough to be near shore or "Atlantis" (nickname, natch), the island city that spans from the outer B region to the outer A2, used as a nigh-lawless research, trade and pirate stronghold.
At the border of outer B and outer A1, on the ocean shores, are the two largest port cities. The Inner Sea has ports that are far more numerous and dispersed.
...and that's all I have figured out for now.
Putting together a new campaign world
I put together the premise behind a new "sandbox"-style campaign, along with the geologic, botanic and economic portions of the map. Need to do the demographic portion, still, as well as follow the intersections into the other planes. Then comes the political, followed by finishing the biome and building the NPC cast. I'm going to need HTML, CSS, JS and transparent PNGs to tie this all together, legibly.
Wednesday, March 10, 2010
Tuesday, March 9, 2010
So I've put in nearly 3000 miles on my car in the last two weeks.
Two solid runs of 1000 miles, a solid(ish) run of about 450 miles, and, just Monday morning, a solid run of about 300 miles.
I think she (the car) likes it; she didn't give me a single complaint. I like driving distance. I'm looking forward to putting more miles on her at the right opportunities.
I have a family member who's pushing her car (a '91) to hit the 500k mark, and it's almost there. My 1996 Buick Regal has about 130k on it, but I'm not pushing for a mile marker; I'm pushing for 2016, when it will be 20 years old. Theoretically, it'd be eligible for a Michigan historical vehicle license. I bet I can still have her running then as well as she is today. I get anywhere from 26 to 38 mpg, depending on road conditions and how I drive, but I haven't tracked my mileage too closely.
I like my car. We work well together.
Monday, March 8, 2010
Magnetized gorillapod
So my brother asked me if I could cross a camera magmount with my gorillapod, and I've been considering how to do that.
The tricky part is replacing a gorillapod foot with a magnet. My current thought is to take the existing foot off the link chain, put the head of a bolt in the recess left behind, and secure the bolt by filling the recess with some sort of ceramic or plaster. Then I'd screw on one of these.
The only thing I really don't like about it is that, compared to my existing magmount, which uses one of these rubber feet, the magnetized gorillapod is going to be hell on surfaces I attach it to; K&J doesn't make the rubber feet for the right size of magnet, and I definitely don't need the 75lb of surface force provided by their 32mm mounting magnets, which is the smallest size they offer the rubber feet for. (I use one of those 32mm magnets as the basis of my existing magmount, and that's more than enough to keep my camera affixed to the top of my car while driving down the freeway. By the way, the reactions you get from people when you start driving down the road with what looks like a DSLR on top of your car can be hilarious; the magmount is low-profile enough to not be obvious. I should paint the camera orange/black or something so people get the idea that it's supposed to be there.)
So I've got two things to resolve before I build it. First, how to make plaster (or some better material equally as cheap to get/make/use). Second, how to put a protective coating on the magnets so they don't scratch painted surfaces.
The problem I see with IPETEE
So I just stumbled across this article. It's funny, because I gave it some thought a few months ago, and there are a few problems with end-to-end encryption as a means of concealment.
First, activity indicates activity. So you connect to a tracker that manages prohibited data. Prohibited how? Doesn't matter. Whether encrypted or not, you have a connection between you and a server managing prohibited data.
Ok, fine. You jump that hurdle by using something like Tor.
Second, there's more than just data; there's metadata. So you don't know exactly what's in a TCP or UDP stream, but you can learn things about it just the same. What do you know about the packet size? Time between packets? Is a constant data rate maintained? How long does the connection last? Certain types of traffic aren't bandwidth-sensitive but latency-sensitive. VoIP is going to have a distinct pattern. Torrenting is going to have a specific pattern. It's techniques like these that allow forensic specialists to detect hidden TrueCrypt volumes.
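Just to make that concrete, here's a toy sketch (Python, with thresholds I made up on the spot) of how an observer could bucket an encrypted flow using nothing but per-packet sizes and timing. Real traffic-analysis tools use much richer statistical models; the point is only that the metadata alone is enough to take a decent guess.

    # Toy flow classifier: guesses traffic type from metadata alone.
    # The size/timing thresholds are invented for illustration.
    from statistics import mean, pstdev

    def classify_flow(packet_sizes, inter_arrival_s):
        """packet_sizes in bytes, inter_arrival_s in seconds, one entry per packet."""
        avg_size = mean(packet_sizes)
        size_spread = pstdev(packet_sizes)
        avg_gap = mean(inter_arrival_s)
        gap_spread = pstdev(inter_arrival_s)

        # Small, uniformly sized packets on a steady cadence look like VoIP.
        if avg_size < 300 and size_spread < 50 and gap_spread < 0.005:
            return "probably VoIP"
        # Near-MTU packets arriving about as fast as the link allows look like bulk transfer.
        if avg_size > 1200 and avg_gap < 0.005:
            return "probably bulk transfer"
        return "inconclusive"

    # 20ms voice frames vs. a saturated download:
    print(classify_flow([214] * 500, [0.02] * 500))    # probably VoIP
    print(classify_flow([1460] * 500, [0.001] * 500))  # probably bulk transfer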
Third, encryption doesn't eliminate the data, it just obscures it, and not perfectly. Even aside from the possibility of cracking the encryption key, it's possible to guesstimate what kind of data an encrypted bitstream represents. I've read of that being used on encrypted hard drives, for example.
Thursday, March 4, 2010
Ground drove my computer loopy.
So I've spent the last seven hours trying to get my desktop to boot something, anything, that's a full operating system. For much of the time, the system was hanging randomly between POST and loading the boot sector, and for much of the rest of the time, it was consistently hanging when I tried to load a Linux kernel. I even wound up going so far as to flash my BIOS, because I couldn't think of anything else to try that made any sort of sense.
The flash upgrade had gotten me to the point where I could at least load boot sectors again, and I was able to run memtest off of live CDs, but I couldn't seem to boot into 32-bit or 64-bit Linux, either my installed version or from a couple Xubuntu live CDs.
I was beginning to suspect some sort of weird flash corruption that was preventing me from using graphics card features, or possibly from switching to protected mode or x64 mode. (I don't know how memtest86+ works, as far as accessing all 8GB of my RAM. I'm pretty sure the BIOS is still in Real mode when it runs its initial sweep, but maybe it's bouncing back and forth between Real and Protected during POST.)
The thought of another outlay to continue having a nice computer at home was not appealing.
Finally, a few minutes ago, I realized something. I had two relatively new pieces of hardware attached to the computer: An APC UPS and a powered USB hub. I disconnected the USB connection between the UPS and the computer, and disconnected the hub, and rebooted the computer. The 32-bit Xubuntu live CD came right up. Huh. Reboot, throw in the 64-bit Xubuntu live CD, and *that* came right up. Huh.
I haven't tried booting off my hard disk yet, and I think that'll require some grub command-line magic to deal with device reordering stemming from the BIOS upgrade and CMOS changes. However, I think it's ultimately workable.
What I think happened is that connecting both the UPS and the powered hub to the computer via USB created a ground loop that was messing with the internals of the USB controller on the motherboard. See, the powered hub isn't plugged into the UPS; it's a fair bit away from the computer, and I'll have to run an extension cord to get the UPS's power to it. As long as the operating system didn't try to do too much with that USB controller, things worked fine. That meant I could get into the BIOS and tweak things, and it meant I could get into grub and memtest without too much trouble. Well, sort of. Remember how, before the flash upgrade, the system would hang at a random point between POST and loading the bootloader? I think the flash upgrade may have changed part of how the BIOS dealt with the USB controller, with the newer version inadvertently working around some of the ground-loop-induced weirdness.
So, yeah. A little more experience for those weird situations.
Depth perception and optics
It's often assumed that two or more vision sources are required for depth perception. That's not strictly true; for one thing, simple experience can teach a monocular person to recognize distances.
On the other hand, there's this idea I had...
*sidetrack*
It stemmed from thinking about image stacking in HDR photos. Image stacking is simply taking several photos of the same subject, where the only change between them is a known, measured characteristic of how each frame was recorded. In HDR, that characteristic is typically your exposure adjustment: how much light you demand your camera collect before saving off the frame. Since your camera's sensor can only precisely measure a finite range of light levels, taking several frames where you move that range around allows you to increase the amount of detail you've captured in bright and dark areas.
However, image stacking needn't *only* be done with exposure adjustments. The concept of stacking applies for any variable you can measurably control while observing a scene, and it just happens that exposure adjustment is the most immediately useful setting to vary.
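(To make the exposure case concrete: if you assume a perfectly linear sensor response, each bracketed frame is just scene radiance scaled by its exposure time, so you can divide the exposure back out and average whichever frames weren't clipped. Real HDR merging fits a response curve and weights pixels much more carefully; the sketch below, with its assumed 8-bit frames, is only the skeleton of the idea.)

    # Naive HDR merge sketch. Assumes 8-bit frames and a linear sensor
    # response, which real cameras only approximate.
    import numpy as np

    def merge_exposures(frames, exposure_times, clip_low=0.02, clip_high=0.98):
        """frames: list of 2-D uint8 arrays; exposure_times: seconds per frame."""
        radiance_sum = np.zeros(frames[0].shape, dtype=np.float64)
        weight_sum = np.zeros(frames[0].shape, dtype=np.float64)
        for frame, t in zip(frames, exposure_times):
            f = frame.astype(np.float64) / 255.0          # normalize to 0..1
            usable = (f > clip_low) & (f < clip_high)     # skip blown-out and black pixels
            radiance_sum += np.where(usable, f / t, 0.0)  # divide out the exposure
            weight_sum += usable
        return radiance_sum / np.maximum(weight_sum, 1)   # per-pixel radiance estimate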
*end sidetrack*
*second sidetrack*
And now a brief bit about aperture width and depth-of-field. A camera's aperture is the hole that allows light to pass through and land on its sensor. It's exactly analogous to your pupil. The larger your pupil, the more light passes through and lands on your retina. The smaller your pupil, the less light lands on your retina.
One weird side effect of optics and aperture size, though, has to do with focusing. With a narrow aperture (such as when you're outside on a bright, snow-covered day), your depth of field is very large, meaning you can see near objects in focus just as well as distant objects, with no additional effort on the part of your lens (whether that lens is your eye's or your camera's). On the other hand, when the aperture is very wide (indoors, lights off, etc.), the depth of field is very narrow. That means you (or your camera) need to adjust the lens in order to focus on near or distant objects, regardless of whether you have one eye or two.
All this boils down to one interesting fact: You can know where your field of focus is if you know your aperture size and your lens configuration.
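The standard thin-lens depth-of-field formulas make that calculable. Here's a rough calculator, assuming you know focal length, f-number, and focus distance; the 0.03mm circle of confusion is a common full-frame assumption, not a universal constant.

    # Thin-lens depth-of-field calculator. All distances in millimetres.
    def field_of_focus(focal_length, f_number, focus_dist, coc=0.03):
        H = focal_length**2 / (f_number * coc) + focal_length  # hyperfocal distance
        near = focus_dist * (H - focal_length) / (H + focus_dist - 2 * focal_length)
        if focus_dist >= H:
            return near, float("inf")  # focused at or past hyperfocal: sharp to infinity
        far = focus_dist * (H - focal_length) / (H - focus_dist)
        return near, far

    # A 50mm lens focused at 3m: wide open vs. stopped down.
    print(field_of_focus(50, 1.8, 3000))  # roughly 2.8m to 3.2m in focus
    print(field_of_focus(50, 11, 3000))   # roughly 2.1m to 4.9m in focus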
*end second sidetrack*
What all this means is that you could take several images, each with known aperture and lens characteristics, and learn how far away objects in the scene are simply by observing how in-focus each area of the scene is for each known field of focus.
So, yes, you could have calculable measured distances to objects in your scene simply by stacking images of that scene where you know the focus range of each of those snaps.
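A minimal sketch of that idea, which machine-vision people call depth from focus: measure local sharpness in each frame of a focus-bracketed stack, then assign each pixel the focus distance of the frame where it was sharpest. The local-variance sharpness measure below is the crudest thing that works; real systems use better focus metrics.

    # Depth-from-focus sketch. stack: list of grayscale 2-D arrays taken at
    # known focus distances; returns the estimated distance for each pixel.
    import numpy as np

    def local_sharpness(img, k=3):
        """Crude sharpness map: variance over a k-by-k neighbourhood."""
        pad = k // 2
        padded = np.pad(img.astype(np.float64), pad, mode="edge")
        windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
        return windows.var(axis=(-1, -2))

    def depth_from_focus(stack, focus_dists):
        sharpness = np.stack([local_sharpness(f) for f in stack])  # (frames, H, W)
        best = sharpness.argmax(axis=0)                            # sharpest frame per pixel
        return np.asarray(focus_dists)[best]                       # frame index -> distance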
Honestly, though, I think this applies more to machine vision than human vision. It's also probably most useful right now for telescopes.
A quick note and idea on fuel efficiency...
...because it's been churning in my head since last Tuesday, and I haven't written it out yet.
When I was driving to South Carolina and back, I discovered I was going 80mph without pushing the car's engine much at all. (The amount of work I was asking the engine to do might have kept me going 55 or 60mph, normally. Maybe 65mph on the average downgrade on I-77.) It took me a while, but I finally came up with a plausible reason this happened.
I had merged onto the freeway, found myself embedded in a large pack of big-rig semis, hadn't settled into traffic enough to set cruise, and had to move to get out of the way of traffic shifts to let another batch of traffic merge in. I signaled, changed lanes to the left and accelerated--and felt my jaw drop when I saw I was effortlessly going 80mph.
What I think happened was that the sparse pack of semis (and filler of smaller vehicles) spanning three lanes had caused the volume of air over I-77 to move roughly uniformly in line with traffic, not too far from the speed of traffic. In effect, the traffic had created its own wind tunnel, and you didn't have to be dangerously close to the back end of a semi to get a drafting effect.
So what would happen if you took a long stretch of road (such as an under-river or through-mountain tunnel) and set up blowers pushing air in the direction of traffic flow? Given the reduction in air drag, how much would you save on fuel economy? Would the energy involved in maintaining a 15mph tailwind in dense-traffic areas be greater than the aggregate energy saved in vehicle fuel? How much of an impact on local pollution would it have? By pulling energy from the electrical grid rather than car engines, you could move the atmospheric cost of energy generation away from areas where it causes health problems. (Places with high pollution due to vehicle emissions might find that useful...)
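For a rough sense of scale: drag force goes with the square of airspeed, and the power spent fighting it is that force times ground speed, so a 15mph tailwind at 70mph trims the drag power by nearly 40%. A back-of-envelope sketch, where the drag coefficient, frontal area, and air density are made-up-but-plausible numbers for a mid-size sedan (rolling resistance and drivetrain losses ignored):

    # Power spent overcoming aero drag at 70mph, with and without a 15mph tailwind.
    RHO = 1.2        # kg/m^3, air density (assumed)
    CD = 0.32        # drag coefficient (assumed)
    AREA = 2.2       # m^2 frontal area (assumed)
    MPH = 0.44704    # metres per second in one mph

    def drag_power_kw(ground_mph, tailwind_mph=0.0):
        v_ground = ground_mph * MPH
        v_air = (ground_mph - tailwind_mph) * MPH  # airspeed the car actually fights
        force = 0.5 * RHO * CD * AREA * v_air**2   # drag force, newtons
        return force * v_ground / 1000.0           # kilowatts spent against drag

    still = drag_power_kw(70)
    windy = drag_power_kw(70, tailwind_mph=15)
    print(f"{still:.1f} kW with no wind, {windy:.1f} kW with a 15mph tailwind "
          f"({100 * (1 - windy / still):.0f}% less drag power)")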
Anyway, it was an interesting experience, and leads to a lot of interesting questions.
Wednesday, March 3, 2010
I finally realized why shutter glasses make sense.
A shutter glasses system allows the video source to provide two video channels, one for each eye. That lets you have stereovision, so you get depth perception for whatever you're watching.
So do polarized-light systems, like IMAX's linearly polarized glasses and RealD's circularly polarized glasses.
So why do shutter glasses make sense? There's nothing about the concept that limits you to two video channels. You could have four, or eight, or even sixteen, once the technology's chronometric precision increases enough. It's time-division multiplexing for the visual field.
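The only real constraint is arithmetic: every viewer-eye channel takes a slice of the panel's refresh rate, so each slice has to stay above whatever you consider a tolerable flicker rate. A trivial sketch of that budget, with 60Hz per eye as my assumed comfort floor:

    # Time-division budget for shutter glasses: every viewer needs two
    # channels (one per eye), and all channels share the panel's refresh rate.
    def per_eye_rate_hz(panel_hz, viewers, eyes_per_viewer=2):
        return panel_hz / (viewers * eyes_per_viewer)

    for panel_hz in (120, 240, 480):
        for viewers in (1, 2, 4):
            rate = per_eye_rate_hz(panel_hz, viewers)
            verdict = "fine" if rate >= 60 else "flickery"
            print(f"{panel_hz}Hz panel, {viewers} viewer(s): {rate:.0f}Hz per eye ({verdict})")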
That's going to have major, major implications for video games. No more splitscreen forcing you to have only half or a quarter of your normal visual field; you and your three buddies can each have the full frame to themselves.
It might even give rise to a new form of cinema, one where both the protagonist's and antagonist's stories are told at the same time, but the viewer has to choose which one they're watching.
Monday, March 1, 2010
Software vs hardware RAID
There are three common types of RAID implementations: "hardware" (high-end RAID cards), BIOS-level software (low-end RAID cards), and system-level software (such as md on Linux).
The RAID support that comes built-in with your motherboard is *probably* software RAID implemented in your system BIOS. Most small RAID cards have their functionality implemented in on-card BIOS that gets loaded when the system boots. As with system-software RAID setups like md, these RAID setups consume your system CPU and RAM as they churn to perform the calculations associated with your RAID configuration, including things like ECC and parity calculations.
All of this is best known as "software" RAID.
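(The parity math itself is nothing exotic, which is part of why it can live anywhere from the kernel to a dedicated card. Here's RAID-5-style parity in miniature: parity is the XOR of the data chunks in a stripe, and any one lost chunk is the XOR of the parity with the survivors.)

    # RAID-5-style parity in miniature.
    from functools import reduce

    def xor_blocks(blocks):
        """XOR equal-length byte strings together."""
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    stripe = [b"AAAA", b"BBBB", b"CCCC"]  # one stripe spread across three data disks
    parity = xor_blocks(stripe)           # what the parity disk would hold

    # Disk 1 dies; rebuild its chunk from the parity plus the survivors.
    rebuilt = xor_blocks([parity, stripe[0], stripe[2]])
    assert rebuilt == stripe[1]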
The funky thing? "Hardware" RAID is also done in software, albeit software that executes on a dedicated processor on your RAID card. (As opposed to your system CPU.) You upgrade this software whenever you upgrade the firmware on the card.
So here's what I'd like to see: an open-source hardware RAID card. All it would amount to is a multi-port SATA controller connected to an onboard CPU, passing data back to the host operating system via the PCIe bus.
The firmware would be built by, well, I dunno. The folks who like md, but don't like running it on their core system. The folks who want to play around with experimental RAID configurations and ideas. The folks who want to try putting LVM into "hardware." There are plenty of possibilities.
Put a fallback firmware set on the card, in case of flash upgrade failure, to avoid bricking the thing.
I'd hit that.