Category Archives: Technology

Things humans make that weren’t there before.

Forwarding Bank Account

To the category of things I can imagine but not do, add this: a bank account that does not exist, but merely points to one that does.  Analogous to a forwarding email address, which does nothing but pass mail along to a real address, the forwarding bank account would solve the conundrum of what to do when you have all your bills auto-debited from a checking account and then decide to switch checking accounts.

The only way that I know to solve this currently involves allocating money in both your legacy and your new bill-paying checking account, and keeping the legacy account open until the last auto-debit is taken out of it.  It’s doable, but it’s tedious, and most companies generate at least one bill that you need to manually pay when you switch accounts on them.  With a forwarding account, you would not switch accounts, as far as your creditors were concerned, so auto-debit would be smooth even when switching your underlying real checking account to another bank.

Can anyone with inside knowledge of how routing numbers and account numbers are actually used in the auto-debit process comment?  Is this feasible, as the system currently works, or would the notion of a forwarding account need to be built in from the ground up?

Powering Off Computers Can Waste Money

Sometimes, the policy-makers in businesses don’t think through the full consequences of their policies.  A recent tweet reminded me of one instance that appears to be fairly common, namely the company policy of powering off computers at the end of the day.  If you’re a developer who has to spend time every day recovering from the consequences of this policy, here are some calculations that might allow you to explain it to your company’s management in terms that they understand: money.

For many job types, powering off computers at the end of the day may be appropriate.  But a developer may have reference material open in Firefox, an IDE with multiple tabs, perhaps a time-tracking application, perhaps an IM client for collaborating with developers outside the company, and it costs time to return the computer, the OS, and the apps to the state they were in before the power-off.  As developers are generally expensive, this loss of time is a loss of money; you’re effectively paying your developers a higher hourly rate by building inefficiencies into their work process.

It’s understandable that the 16 hours between day end and day start are targeted in the interests of saving electricity, and the cost savings, while small as a percentage of any organization’s expenses, are easy to obtain.  But let’s do some calculations to see if it makes sense to turn off a developer’s computer at the end of the day.

Scenario #0

The baseline, Scenario #0, is 100 developers leaving their computers powered on, but idle, 365 days a year.  This costs electricity, but there is no impact on developer efficiency.  Consider the baseline cost as $baseline, with developers spending $0 of their salaries recovering state.  Any effort to save money will be compared against this state.  Most nontechnical people would assume that any solution involving powering off unused equipment would save money, which is why so many of them advocate the “power off” policy.

Scenario #1

Scenario #1 will test the premise that powering off computers at the end of the work day saves money.  I assume 100 developers shutting down their computers at the end of their work days.  This costs less electricity than the baseline, but there is an impact on developer efficiency.  Developers are assumed to work 250 days a year (50 weeks of work; 2 weeks of vacation; no working weekends).  (I know this will amuse some developers, but the potential cost savings of powering down a computer are directly proportional to the number of days a developer works; if anything, I’m giving the idea a better chance.)  250 work days a year means that computers are unused for 16 hours a day during those 250 days and 24 hours a day during the remaining 115 days, for a total of 6,760 hours a year.

The worst-case average commercial utility rate I could find was $0.1949 per kWh, and that’s in Hawaii.  So let’s use $0.20 per kWh as the electricity cost.  To know how much it costs to keep an idle computer powered on when it’s not being used, we need to know how much power it draws when idle.  My 3-year-old Core2 Duo laptop draws between 34W and 40W when idle.  Let’s assume the worst and imagine a computer that draws 100W when idle, more than doubling my real-world example.  Such a computer would burn 676,000 watt-hours (676 kWh) per year during its unused time, costing $135.20 per year for the time in which it’s unused.  If it were powered off during that time, the total savings over 100 developers would be $13,520 per year.  Electricity cost in Scenario #1 would be $baseline – $13,520.

Let’s now consider the time involved.  The lowest plausible salary for a useful developer may be ~$30,000 per year, though higher is surely more likely (especially in Hawaii).  Let’s also figure that it takes 5 minutes, at best, to restore a powered-off computer to the state it was in before it was powered off.  Over our developer’s 250 annual work days, those 5 daily minutes of state recovery work out to ~20.83 hours a year.  At ~$14.42 per hour (a $30,000 salary spread over a 2,080-hour work year), a business could consider that, while they’re not paying a developer anything extra, $300.41 per year of their salary goes toward state recovery.  In a company with 100 identical developers, this means that $30,041 per year is spent paying those 100 developers just to recover from their computers being powered off.  Compared with the baseline, this scenario costs $baseline – $13,520 + $30,041, or $16,521 more than the baseline of leaving the computers on all the time.
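For the skeptical, Scenario #1 can be sketched in a few lines of Python.  All figures come from the scenario above; the only assumption of mine is the 2,080-hour work year (52 weeks × 40 hours) used to turn salary into an hourly rate:

```python
# Scenario #1: power off at day's end; developers manually restore state.
# Assumption: a 2,080-hour work year for converting salary to hourly rate.

RATE_PER_KWH = 0.20                        # worst-case electricity cost
IDLE_WATTS = 100                           # assumed worst-case idle draw
UNUSED_HOURS = 250 * 16 + 115 * 24         # 6,760 unused hours/year
DEVELOPERS = 100

# Electricity saved by powering machines off during unused hours
kwh = IDLE_WATTS * UNUSED_HOURS / 1000     # 676 kWh per computer
electricity_saved = kwh * RATE_PER_KWH * DEVELOPERS        # $13,520

# Developer time spent manually recovering state
hourly_rate = 30_000 / 2080                # ~$14.42 per hour
recovery_hours = 250 * 5 / 60              # ~20.83 hours/year per developer
state_cost = recovery_hours * hourly_rate * DEVELOPERS

net = state_cost - electricity_saved       # positive = worse than baseline
print(f"electricity saved: ${electricity_saved:,.2f}")
print(f"state-recovery cost: ${state_cost:,.2f}")
print(f"net vs. baseline: +${net:,.2f}")
```

The result lands within a few dollars of the $30,041 and $16,521 figures above; the tiny difference comes from rounding the hourly rate to $14.42 before multiplying.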

The technical among you are already shouting at your screen.  What about suspend/hibernate options?  You can suspend a computer to RAM, meaning you put it in a state where it sips only enough power to listen to a wakeup call while keeping its state alive.  Scenario #2 considers what would happen if computers were put in a low-power state with fast recovery time, and Scenario #3 examines a no-power state with slower recovery time.

Scenario #2

My laptop suspends within 6 seconds, though that time doesn’t factor in because I can just close the lid and walk away.  While suspended, it sips 1W of power.  It takes another 6 seconds to wake back up to its pre-suspend state.  Assuming more worst-case math, let’s say it really takes 12 seconds to recover and sips 2W of power while suspended.  Re-using the other numbers above, the 100 computers in our fictional company would consume $270.40 in electricity per year while suspended, making the electricity cost $baseline – $13,520 + $270.40, or $13,249.60 less than the baseline.  Likewise, our fictional company would pay their 100 developers $1,200 a year to wait while their computers recover state.  This makes Scenario #2 cost $12,049.60 less than $baseline.
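The same sketch for Scenario #2, under the same assumption (the 2,080-hour work year is mine; the rest of the figures come from the numbers above):

```python
# Scenario #2: suspend to RAM at day's end; resume takes 12 seconds.
# Assumption: a 2,080-hour work year for converting salary to hourly rate.

RATE_PER_KWH = 0.20
SUSPEND_WATTS = 2                          # assumed worst-case suspend draw
UNUSED_HOURS = 250 * 16 + 115 * 24         # 6,760 hours/year
DEVELOPERS = 100

# Electricity sipped while suspended (vs. $13,520 saved by not idling)
suspend_cost = SUSPEND_WATTS * UNUSED_HOURS / 1000 * RATE_PER_KWH * DEVELOPERS

# Developer time: 12 seconds per workday waiting for resume
hourly_rate = 30_000 / 2080                # ~$14.42 per hour
wait_hours = 250 * 12 / 3600               # ~0.83 hours/year per developer
wait_cost = wait_hours * hourly_rate * DEVELOPERS

net = suspend_cost + wait_cost - 13_520    # negative = saving vs. baseline
print(f"suspend electricity: ${suspend_cost:,.2f}")
print(f"resume-wait cost: ${wait_cost:,.2f}")
print(f"net vs. baseline: ${net:,.2f}")
```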

Scenario #3

My laptop takes 50 seconds to hibernate, though it’s not necessary to wait around for it to finish.  During hibernation, it uses exactly 0W of power to sustain state.  It takes 110 seconds to wake back up to its pre-hibernation state.  Let’s make this worse and assume it really takes 5 minutes to recover.  (I won’t increase the power consumption, because there is none in this state; it’s always 0W unless you’re using a laptop whose battery is charging.)  Re-using the other numbers, the 100 computers in our fictional company would consume $0 in electricity per year while hibernating.  Our 100 developers would spend $30,041 of their time waiting for their computers to recover state.  Comparing against the baseline, we have an electricity cost of $baseline – $13,520 and a developer cost of $30,041, which looks suspiciously like Scenario #1, because 5 minutes of waiting for the computer to restore its state is equivalent to 5 minutes of a developer manually restoring it.

I have simplified somewhat by not taking into account the time a powered-off computer takes to boot, whether from a full shutdown or from hibernation, but if you extend my calculations to include that time, you’ll find that it merely amplifies my results or breaks ties in favor of state preservation.  Likewise, using more realistic values for developer cost and power consumption only amplifies the results.

Final Thoughts

It is never a good idea to have developers lose state.  The best-case scenario is to suspend the state to RAM, and if your computers can’t do that, you need to make it happen.

Even with the highest electricity prices and unrealistically low developer salaries, the idea of abandoning computer state at the end of the day to save money does not work.  However, even the best scenarios for saving money, from any angle, are insignificant when considered against a company’s total income and expenses.  So why not just set a policy that improves workflow?

Costs, revisited

Scenario #0: $baseline

All computers left on 365 days a year, 24 hours a day

Developers spend no time saving/restoring state

Scenario #1: $baseline + $16,521

All computers powered off 16 hours a day during workdays and 24 hours a day during weekends and vacation

Developers spend 5 minutes a day restoring state manually

Scenario #2: $baseline – $12,049.60

All computers suspend state to RAM for 16 hours a day during workdays and 24 hours a day during weekends and vacation

Developers spend 12 seconds a day waiting for state to restore automatically

Scenario #3: $baseline + $16,521

All computers hibernate, saving state to disk, for 16 hours a day during workdays and 24 hours a day during weekends and vacation

Developers spend 5 minutes a day waiting for the computer to restore state automatically

It’s Time to Make the Switch

I have been using Linux primarily since 2000 and exclusively since 2003.  In that time, I have found Linux to be a robust and well-supported OS, as long as you stick to compatible hardware.  Windows, on the other hand, was always suffering from self-inflicted wounds or fundamental vulnerabilities.  I was so much more productive in Linux than I was in Windows that it was really a no-brainer.  However, I came to a realization recently: Windows’ many flaws drive the economy.

If it weren’t for Windows, we wouldn’t have so many anti-malware companies vying for your purchase, we wouldn’t have Geek Squad, and large companies wouldn’t have entire divisions dedicated to just keeping Windows from falling apart.  In short, Windows is where the money is, because Windows always needs TLC.  The same thing is reflected in technology shows and podcasts – the vast majority of questions are about Windows, because Linux users mostly don’t have problems, and the ones they have can be self-diagnosed and fixed.  If I were to continue on my path of using Linux and converting others to Linux, I would be chipping away at the enormous Windows service aftermarket, and people would lose their jobs.  I don’t want to be a part of that.

So, as of today, I am switching to Windows so that I can learn how to fix its inherent flaws and get a slice of that Windows support pie.  I’ll be careful to fix them temporarily, since the real money is in not addressing the underlying problems.  After all, if fixing the cause were the goal, people would just have switched to better OSes long ago.

Babies as Destroyers of Technology

I have heard numerous tales of babies intentionally or inadvertently destroying all kinds of technology, from cell phones, to DVD players, to televisions, to speakers, to headphones, and so on.  I know that a few of you have technology and kids.  What advice can you give me that will let me protect both my imminent daughter and my various tech?

The World’s Dumbest Energy-Saving Tip

We have a large toaster oven with several rack height settings.  I used to thoroughly re-heat things like pizza in 15 minutes on the “bakery” setting, with the rack in the middle.  On a lark, I raised the rack and discovered that I could do the same job in 7 minutes, closer to the top heating element (but farther from the bottom).  The simple step of moving the rack closer to one of the heat sources has saved me more than 50% on my toaster oven’s electricity usage.
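A one-line sanity check on that claim, assuming the oven draws roughly the same power at either rack height, so energy use scales with cook time:

```python
# Energy = power x time, so at constant power the saving is simply
# the fractional reduction in cook time.
old_minutes, new_minutes = 15, 7
savings = 1 - new_minutes / old_minutes    # just over 53%
print(f"{savings:.0%}")
```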

This surely ranks up there with advice such as “put a sweater on to save on your heating bill” and “live in the basement in summer to avoid cooling the house.”

Conferring Immortality?

Preface

Authors, philosophers, and scientists have explored these concepts in detail, and it’s my hope to introduce them to you, not to claim them as my own.

Feasible Immortality

There is a recent notion that some people alive today may achieve biological or technological immortality.  Biological immortality focuses on things like prevention and repair of cellular damage, immortality of cell lines, replacing aged organs with newly-grown ones, and removal of what I can best call “crud” that accumulates in your body as you age.  Technological immortality involves replacing failing biological components with artificial ones, or replicating your personality, in essence porting it from wetware to hardware or software.  We do much of this now under the umbrella of general medicine, achieving longer and healthier lives.

What I want to explore is the philosophical question of whether or not immortality can be conferred on an individual, or if the conversion to immortality conceptually kills the original and produces an immortal copy.

Biological Immortality

While our structure remains relatively constant over time, the atoms that comprise us change regularly as cells replace themselves.  On average, you are made from completely different atoms every seven years.  If you define yourself based on the atoms that are in your body, or specifically your brain, then you’ve already been replaced Y/7 times if you’re Y years old.  Any memory you have from eight years ago was stored in cells that have died and left copies.  You can still access those old memories because cell replacement doesn’t change structure, assuming everything goes well.

If medicine did allow us to extend our cell lines indefinitely, avoid replication errors, fix or prevent inherited genetic diseases, and remove accumulated “crud,” we could live forever if we avoid the same things that can kill us prematurely now.  We’d still be ourselves – at least as much as we are ourselves now, with our atoms changing every seven years.

Technological Immortality

An immortality solution that emphasizes technology could eventually replace each part of your biological body with a piece of technology.  To jump ahead slightly, let’s assume every organ, including the brain, can be replaced with a modular piece of technology that performs like the original.

What if your biological brain were put in a technological body?  Simply doing this would significantly extend life, since a lot of things that damage the brain are caused by the rest of the body.  In conjunction with biological brain immortality, it could confer full immortality.

Would your real brain in a technological body still be you?  If you think of our brains as piloting our bodies anyway, then you’d probably say yes.  To me, this seems reasonable.

What if, instead of using your biological brain, some process scans your biological brain and produces an exact copy of it in hardware and/or software?  Does that confer immortality to you, or does it merely copy you?  What if the scanning process is destructive, resulting in some time when the pattern in your brain that is you isn’t complete?  Would that murder you and create a copy?  To me, I’d consider it murder for a scanner to tear my brain to bits as it studied the structure.

Those are the edge cases.  It gets trickier when you think of the middle ground.

What if there were a way to slowly replace the biological structures in your brain with technological ones?  Your brain already replaces cells on its own, and we don’t generally think of this as killing us, so how is it different for a technological process to take over?  One by one, neurons, axons, dendrites, would be replaced by functionally-equivalent nanotechnological components.  If we set the timescale for the conversion to be seven years, then we ensure that it happens no faster than nature.  Are you still you when one of your neurons is technological?  What about 100?  What about 10 billion (roughly 10%)?  One thing is assured: by the end of the seven-year process, your biological brain will be gone.

Wait, it gets creepier.

Let’s say you’ve made the switch to a nanotech brain, as above.  Assuming you still think you’re you, and not the murderer of your identical twin, what would happen if you converted your nanotech hardware brain into a software brain?  Is there a difference between two things if they behave the same?  As before, the conversion could be done gradually, over a seven-year period, slowly deactivating pieces of the hardware brain and activating equivalent software representations in a computer brain.  Would you still be you?  If the answer is yes, then what exactly are you now?  Are you the structure that the software represents?

It gets stranger, too.

One of the authors I’ve read suggests that if you’re ok with neurons being replaced by functional equivalents, then the physical geometry of these neurons in relation to the others doesn’t matter much.  Instead of direct physical connections, why not go wireless?  You could store parts of your brain in your house and keep just a small portion of it in your body for tasks like reflexes where latency is an issue.

Closing Thoughts

To me, the edge cases seem clear: I’ve had biological immortality conferred upon me if my brain is able to maintain its normal biological processes indefinitely; I’ve had an immortal copy made if someone reads and copies my brain in a single step, and I’m murdered if the read is destructive.  What bothers me is that I don’t know how I feel about something that slowly and destructively copies my brain in place.  The continuity is what throws me, because it would offer the same continuity as biological brain immortality, and yet by the end my biological brain will have been slowly destroyed.  Of course, if continuity problems bother me, I should stop sleeping.

Free Virtualization in Linux

There are lots of ways to create virtual machines in Linux, and I heard about some of them at CPOSC.  In the course of helping someone learn ways to run Windows XP virtualized inside Linux, I found out about VirtualBox OSE.  It’s fairly easy to install, set up, and start using.  Right now, I’m installing Fedora 9 on a virtual machine, just to see what the state of affairs is.

I know that lots of you out there use virtualization at home and at work, so I’d be curious to hear how VirtualBox OSE compares with the commercial and free solutions that you use.

UPDATE: Someone asked me on IRC if there was a server mode.  It does appear that there is.

Strange Phone Damage

A few days ago, Sun called me.  I didn’t have the Jawbone paired, so I just answered on the RAZR V3 itself.  The first thing she did was sneeze, right into the receiver of her work phone, after which her voice took on a buzzing quality.  From the sound, it was clear that the phone’s ear speaker had done something strange.  Fearing the worst, I tried rebooting it to make sure it wasn’t some temporary condition, but there was no fix to be had.  Just to be very clear:

My wife’s sneeze broke my phone’s speaker.

The phone is about 3 years old, so I’m happy to write it off as the inevitable failure of temporary technology, but “you would think” that a phone shouldn’t be allowed to send a sound loud enough to blow out its own speaker.

Oh well.  I can still use it with the headset, and I was waiting for an excuse to get a new phone anyway.

Wiki-style Office Organization

Assume we could easily model a 3D space, like an office, and account for every object in it.  Then assume that a GUI exists with which people could move things around – tables, chairs, computers, monitors, books, etc. – and update the layout.  Anyone with chronic disorganization problems in their office could let the so-called wisdom of the crowd help organize the office.  Then it’d be up to the user of the space to make the changes in the real world.

We already do this now for documentation (such as Wikipedia), so why not for physical spaces?