Monday, November 29, 2004

The Technology Squeeze: Truck Drivers

DARPA’s Grand Challenge gets more press than it deserves. It’s an entertaining competition, and there really is some great engineering work being done, but like most of DARPA’s projects, the Grand Challenge is about planting thousands of seeds in the hopes that something will turn out to be a useful technology in the long term. A short-term incubator it is not, but the competition was designed to meet Congress’s goal of making 1/3 of the military’s operational ground combat vehicles unmanned by 2015. Now, 2015 may not sound short-term to some readers, but given the historically slow procurement cycles in the U.S. military, I think this is going to be a very difficult target to reach. 1/3 is a LOT of vehicles.

I think the Grand Challenge is an unrealistic approach to meeting that goal because it spurs the development of systems for completely autonomous driving. The level of artificial intelligence required for this is huge, because it is so difficult for a system of cameras, radars, sonars, laser rangefinders, and their digital processors to distinguish between a bush, a boulder, and a brown paper bag. At least one team in the previous round of competition resorted to instructing its vehicle not to drive over any bushes at all. Imagine a combat vehicle unwilling to risk driving over a shrubbery!

Not only is complete autonomy for ground vehicles unrealistic for large-scale deployment in the near term, but pushing for unmanned combat vehicles may not be the best way of meeting Congress’s ultimate purpose behind the goal, which was presumably to save lives. Recent counterinsurgency experience in Iraq, the kind of warfare our military expects to be the norm in coming decades, suggests that non-combat vehicles are the more dangerous place to be. Every guerilla knows that the supply chain is the soft underbelly of any large military. Why go after M1 tanks when unarmed, unarmored, and often unescorted semi-trucks provide ample opportunity to increase the body count on the evening news and reduce the ability of the U.S. Army to extend its reach into insurgent strongholds? Now that the Army is scrambling to better armor and escort its truckers, perhaps it’s time to consider alternatives to having a human driver in every cab; alternatives workable in the short term. Semi-autonomous alternatives.

Way back in 1995, a robotic driving system called RALPH drove a vehicle from Washington D.C. to San Diego, CA in control 98.2 percent of the time at an average speed of over 62 mph. Efficiency dictates that just about anywhere robots could work successfully 98% of the time, they should be allowed to do 98% of the work. As I've pointed out in my previous posts on robotics in the workplace, this is made possible by keeping a human in the loop. In RALPH’s case, the human in the loop was the system's designer, a doctoral candidate in the driver’s seat by the name of Todd Jochem (he subsequently earned his Ph.D.).

Obviously it wouldn’t do much good to give Army truckers autopilots that needed the human in the loop to be sitting just inches away. But what if truck convoys were actually chains of semi-random X’s and O’s, like XOOXOOOXOXOOXXOOO, where every ‘X’ is a human-driven vehicle and every ‘O’ is a robot-driven vehicle that mostly just shadows the vehicle in front of it as though following a short trail of breadcrumbs? This would get around the bush/boulder/bag problem because the robot trucks would only drive over the tracks of the human trucks. It would retain the intelligence of human drivers while easily eliminating a third of them. Just tint the windows of all the trucks, and stick dummies in all of the unmanned ones so that the guerillas can’t easily tell which trucks have fleshy wetware on board.
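
To make the breadcrumb scheme concrete, here is a minimal sketch of a follower controller in Python, assuming each truck periodically broadcasts its GPS position to the truck behind it. The class name, the 2-meter “reached” radius, and the 40-meter standoff distance are my own illustrative inventions, not anyone’s fielded system.

```python
import math
from collections import deque

class BreadcrumbFollower:
    """Follow the GPS 'breadcrumb' trail dropped by the vehicle ahead."""

    def __init__(self, standoff_m=40.0):
        self.standoff_m = standoff_m  # minimum trailing distance
        self.trail = deque()          # crumbs not yet driven over

    def on_leader_position(self, x, y):
        # Called whenever the vehicle ahead broadcasts its position.
        self.trail.append((x, y))

    def steering_target(self, my_x, my_y):
        # Discard crumbs we have effectively reached.
        while self.trail and math.hypot(self.trail[0][0] - my_x,
                                        self.trail[0][1] - my_y) < 2.0:
            self.trail.popleft()
        if not self.trail:
            return None  # nothing to follow; hold position
        # Too close behind the leader's newest crumb? Hold back.
        newest_x, newest_y = self.trail[-1]
        if math.hypot(newest_x - my_x, newest_y - my_y) < self.standoff_m:
            return None
        return self.trail[0]  # aim for the oldest crumb: stay in the tracks
```

Aiming at the oldest crumb rather than at the leader itself is the whole point: the follower stays in the leader’s proven tracks instead of cutting corners toward it.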

But what about that lead vehicle? It might as well have a bull's-eye painted on it. It’s not the place a human would want to be, yet it’s also clearly the place a human needs to be. Would it be so hard to equip it with the kinds of high-resolution cameras and all-weather imaging systems used by the better-funded Grand Challenge teams? The remote driver could remain somewhere in the convoy, where he would be close enough to react to any environmental concerns unrelated to the road ahead. This would also make it unnecessary to clog communications satellites with high-bandwidth imagery the way remotely piloted surveillance aircraft do.

But driving the convoys from the comfort of a base in Tennessee would definitely have some advantages. Besides getting humans out of harm’s way, it would reduce logistical needs abroad, eliminating the need for some of those trucks entirely. Remote truckers don’t need their food, water, and shelter transported to the other side of the globe.

When semi-autonomous driving and remote driving catch on with the military, you can bet that civilian freight lines won’t be far behind. There will be some political hurdles, of course, and they should not be understated; people won’t like the idea of potentially buggy computers driving a tractor trailer next to them at 75 mph. But the vision I have in mind doesn’t seem too far-fetched.

In the near term, semi-autonomous trucks would be confined to the highways along major long-haul routes—the kinds of places where it’s not unusual even today to see two or three trucks from the same freight line keeping an inline formation. Local drivers would move long-haul freight to and from waystations just off the highway, where a long-haul driver would pick up two or three trucks at a time headed his way. These would be dropped off at the waystation nearest their destination, or at a waystation acting as a hub to other waystations. Distribution centers for major retailers like Wal-Mart are already located just off highways, often with dedicated off-ramps outside of city limits, so the biggest waystation hubs are already in place.

The robotic trucks in a convoy would be designated as such with a standard pattern of lights, reflectors, and symbol placards to help other drivers realize that these vehicles may not be as forgiving in some situations. They would also be equipped with a 360-degree array of the kinds of radar or laser rangefinders found in today's “smart” cruise control systems, providing the human convoy pilots with a clear picture of the traffic around them. At the human’s direction, or on their own if need be, convoy trucks could temporarily move to different lanes or allow non-convoy vehicles to come between them.
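
A hedged sketch of the gap-keeping rule such a truck might run, with invented numbers standing in for whatever traffic engineers would actually choose:

```python
def desired_gap_m(speed_mps, merging_vehicle_detected):
    """Following distance for a robot convoy truck (illustrative only).

    Holds roughly a two-second gap normally, and opens it to four
    seconds when the rangefinder array reports a vehicle signaling
    into the convoy, letting civilian traffic slip between trucks.
    """
    seconds = 4.0 if merging_vehicle_detected else 2.0
    return max(10.0, speed_mps * seconds)  # never tighter than 10 meters
```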

In the longer term, the humans in the loop would become more like ground-traffic controllers, remotely overseeing many autonomous trucks in many areas simultaneously. These operators would spend most of their time helping trucks through stretches of high traffic or ambiguous road conditions, and many would probably specialize in a few tricky locales. As a truck enters a difficult area or legally mandated safe corridor, command is routed to the available operator best equipped for it.
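
As a sketch of how that routing might work, assuming each operator registers the locales he specializes in and how many trucks he is already handling (all names and fields here are hypothetical):

```python
def route_to_operator(truck_locale, operators):
    """Pick the ground-traffic controller best equipped for a tricky stretch.

    'operators' maps an operator name to a record like
    {"specialties": {"I-40 Flagstaff"}, "load": 3, "capacity": 8}.
    Prefers an available specialist in the truck's locale; otherwise
    falls back to the least-loaded operator with capacity to spare.
    """
    available = {op: rec for op, rec in operators.items()
                 if rec["load"] < rec["capacity"]}
    specialists = [op for op in available
                   if truck_locale in available[op]["specialties"]]
    pool = specialists or list(available)
    if not pool:
        return None  # everyone is saturated; the truck waits it out
    return min(pool, key=lambda op: available[op]["load"])
```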

Ultimately, semi-autonomous and fully autonomous drivers will be safer than a fleet of human drivers, both in peace and in war. Trucks without drivers may become less tempting targets for guerillas looking for media attention, saving the lives of both drivers and bystanders. Long-haul truckers will be less likely to be hypnotized by the road or to fall asleep on monotonous, uneventful stretches, because those are exactly the stretches where they will be coordinating multiple vehicles at once. And remote drivers acting collectively would never have any incentive to drive when sleepy; there would always be someone else close by who could take over. Finally, the logical conclusion of DARPA-style projects will be artificial driver-savants that, while subhuman by many measures of intelligence, are better drivers than humans could ever hope to be.

I just don’t see them making it to mass production in time to meet that 2015 goal.

Immortality Institute Chat

I was the featured guest speaker tonight for the Immortality Institute's Sunday Chat. You can find the transcript here.

In an unmoderated chat format like this, it's hard to decide which questions and comments to respond to. I hope I didn't offend anyone as a result of my on-the-spot decisions.

In particular, I regret being unable to respond to a question about other types of fallacies and failures I think futurists can fall into. I actually have a list of twelve in my notes. In truth, all futurists who make useful predictions are ultimately wrong, so one or more of these failures gets everyone in the end. But it can be instructive to ask which failures are most likely for today's futurists.

Look for an upcoming essay summarizing the conclusions I've reached while preparing for this chat. It, or a link to it, will appear on this blog.

Saturday, November 27, 2004

The Reading Cure

(Note: Since I don’t have a professional-level education in psychology, I’m probably about to overstep my bounds here. Oh well. I’m claiming blogger’s license. But I did take a little Psych in college, and have read a few books and articles on the subject, so I don’t think I’m completely uninformed.)

Psychology is in the midst of a fragmented, long overdue civil war. Battle lines have been drawn in efforts to define the very nature of the field.

In one camp, there are established professionals versed in the traditional talking cures descended from Freud, the behavioral therapies inspired by Skinner, and the innumerable variations espoused by others who continue to create esoteric models to explain how the mind works.

In another camp, a mostly younger breed of scientifically minded therapists is trying to incorporate standards of scientific evidence in an effort to implement whatever actually works. Credible studies have suggested that receiving traditional talk therapy from a professional psychologist is no more effective than talking with a sympathetic listener.

A related and overlapping camp of secessionists questions the very relevance of talk therapy in the age of antidepressants and other provably effective drugs that work directly on the brain. Advanced brain scanning, meanwhile, increasingly offers the chance of actually spotting and surgically correcting the physical abnormalities behind some of the rarer but more serious psychiatric problems.

Ultimately, in the coming age of nanotechnology and artificial intelligence, I think the human brain will be so well understood that there will be little need for “black box” hypotheses that only occasionally result in effective treatments—treatments for which the causal mechanism is often unknown. But until that time, I submit that an alternative, memetic approach lending itself to scientific comparison may be the sanest way to treat many of the softer issues addressed by counseling and therapy.

I define “soft” issues as those that don’t justify or lend themselves to today’s drugs, and only warrant therapy because the patient believes that they do. A person who considers himself to be intolerably shy, for example, may seek counseling, even if those around him think he’s just a little withdrawn. Someone deeply disaffected with a marriage, yet equally determined to fix it, may seek counseling as well, even if divorce would be a socially acceptable solution. In such cases, the patient is considered successfully treated when the patient declares that it is so. There is really no alternative to this subjective victory condition in soft therapy, except in cases where close associates of the patient deserve to be polled as well. (A family with an abusive parent is not served by a patient who considers himself cured because the therapist has convinced him that abuse is an appropriate expression of frustration.)

I think it’s fair to say that no two therapeutic conversations are exactly alike. The therapist, patient, and context will always vary from one occasion to the next. This makes it hard to scientifically study the effectiveness of a counseling approach. About the best we can hope for is a rough comparison between obviously different counseling philosophies and loose “control groups” that are “treated” by any sympathetic listener. Any increase in uniformity among therapy sessions allowing for cleaner studies would therefore potentially be worth the loss of customization; effective session content would quickly reveal itself as such, creating opportunities for developing still more effective, testable techniques.

It turns out that we already have a ready supply of uniform therapy sessions. They are called books. Not necessarily non-fiction self help books, either. I think every reader of fiction has encountered a story that noticeably changed their outlook or behavior. Soft psychology problems—the ones we’re not yet trying to identify and treat physically—are memetic in nature (composed of ideas and concepts); at least, we are only concerning ourselves with the memetic component if we’re calling it a soft problem. As dense memetic delivery systems, books can alter or overthrow patterns of ideas and values in the mind of a reading patient, literally changing their lives.

The pharmaceutical industry provides a ready model for the techniques of identifying and marketing effective treatments. Drug companies typically spend only a slice of their efforts creating new compounds; the low-hanging fruit is in testing existing compounds found in nature or in drugs used for other purposes, to see if they provide a net benefit in the treatment of a particular condition. Similarly, a memetic drug company would have ample low-hanging fruit in the form of existing literature; there would be little need to develop stories or self-help books from scratch. If anecdotal evidence suggested that “Gone With the Wind” was effective for people who feel hopelessly codependent, the memetic drug company would sponsor studies and trials in an effort to compare it to other potential alternatives. When proven safe and effective (perhaps by the inevitable memetic arm of the FDA), the book could be marketed as a memetic drug.
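
What would such a sponsored trial look like? At its simplest, a comparison of improvement scores between readers of the candidate book and a control group assigned a sympathetic listener. A toy sketch, with entirely hypothetical data, of the effect-size arithmetic involved:

```python
from statistics import mean, stdev

def cohens_d(treated, control):
    """Standardized difference between two groups' improvement scores."""
    pooled_sd = (((len(treated) - 1) * stdev(treated) ** 2 +
                  (len(control) - 1) * stdev(control) ** 2) /
                 (len(treated) + len(control) - 2)) ** 0.5
    return (mean(treated) - mean(control)) / pooled_sd

# Hypothetical self-reported improvement (post minus pre):
book_group = [4, 6, 3, 5, 7, 4]      # assigned the candidate novel
listener_group = [2, 3, 4, 1, 3, 2]  # talked with a sympathetic listener
print(round(cohens_d(book_group, listener_group), 2))
```

Because the “treatment” is identical for every reader, differences between groups are easier to attribute to the book itself than they would be to any one therapist’s improvisations.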

We will probably never see scientifically minded, federally regulated meme-drug companies like this, however, because of the easily reproducible nature of media. It would be tough for a company to make enough profit to cover the cost of the studies when anyone can check the cure out of a library or download a copy off the internet. But perhaps the combination of copyright, buzz, and medical authority would offer enough profit potential, in the form of boosted sales for forgotten or declining titles. If not, government money and philanthropy already headed into scientific psychology (I don’t know how much of this there might be) could be directed to such an effort.

But perhaps the only way we’ll ever see anything like this will be if people take it upon themselves to create it, in the form of internet communities where people can rank books they’ve read according to their value in treating their various psychological discontents.
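
The data model for such a community could be almost trivially simple. A sketch, with hypothetical field names, of how member ratings might be rolled up into a ranking:

```python
from collections import defaultdict

def top_titles(ratings, ailment, min_votes=5):
    """Rank books by average member-reported helpfulness for one ailment.

    'ratings' is a list of (member, ailment, title, score_1_to_5) tuples.
    Titles with too few votes are dropped so that a single enthusiastic
    review cannot dominate the list.
    """
    totals = defaultdict(lambda: [0, 0])  # title -> [score_sum, vote_count]
    for _, a, title, score in ratings:
        if a == ailment:
            totals[title][0] += score
            totals[title][1] += 1
    ranked = sorted(((s / n, n, title) for title, (s, n) in totals.items()
                     if n >= min_votes), reverse=True)
    return [(title, round(avg, 2), n) for avg, n, title in ranked]
```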

The reading cure would not be for everyone. Not everyone can read at the level necessary to receive the impact of a given book. (Movies and recordings might be effective substitutes.) And some people will always feel most comfortable when talking to someone professionally trained to listen. But many seek traditional therapy only because they know of no alternative, and needlessly waste their money on whatever marginal treatment their therapist happens to use. The reading cure could be an inexpensive and effective alternative to traditional psychology.

Friday, November 26, 2004

'Robot: Mere Machine to Transcendent Mind' (the title says it all)

No future study is complete without graphs of exponential curves, but in Hans Moravec’s ‘Robot: Mere Machine to Transcendent Mind’ the accelerating curve from mundane to profound is an emergent form identifiable in the structure of the book itself. The book opened on page one is not the same one closed on page 211; readers looking for the hard stuff they associate with Moravec might be confused or turned off if they lack the trust to wait patiently—or the impulsiveness to skip ahead.

Moravec uses the first two chapters (50 pages) to relate the history of today’s semi-autonomous robots—a tale he is uniquely positioned to tell largely in the first person. In direct, semi-technical prose, he talks in big blocky paragraphs about maps, and MIPS, and the general trade-offs designers make today when creating robots suited to particular tasks, whether they be high-speed driving, low-speed roving, or anything in between. Most of this information will be easily forgotten, but by the end of the discussion the reader has at least a chance of relating to the scale of the numbers tossed around in succeeding chapters.

Chapter three is a very different take, looking past the misleading veneer of robot limbs and sensory organs and into the mind of the machine. This is a chapter about the possibility of true artificial intelligence, and the nine most common objections to it: attacks Moravec parries with the serene precision of a judo instructor. I have no doubt that he has had many, many opportunities to hone these arguments in discussions with people in and out of academia.

The midpoint of the book details Moravec’s categorization of robot abilities by “generation”. As computing power climbs, the range of applications to which new generations of robots can be applied increases accordingly. Physical dexterity grows as well, as does the frequency of opportunities for disaster—robots making mistakes that injure themselves or others. Robot minds may feel analogues to pain and pleasure as part of their designs, learning from traumatic mistakes and great successes. By the fourth generation, robots are roughly as smart as human beings.

Here, I part company with some of Moravec’s conclusions. His world of tomorrow, filled with human-equivalent robots shaped by superintelligent AI, is, in his telling, neither inexplicable nor unstable. Robots, as agents of corporations in a capitalist environment, seek only to profit within the bounds of laws created by and for humans; the machines evolve to become ever more proficient in this domain. Not all robots will play along, he argues, but cooperation is such a beneficial long-term strategy that an overwhelming majority of minds will block the ill intents of the few bad eggs, who in any case wouldn’t survive into the next round of corporate mergers and breakups. Moravec does not seem to share my concerns about runaway seed AI—a mind miles above all others and becoming more intelligent every second, unstoppably bending the universe to a will that may or may not include our own.

But, perhaps to any true humans remaining in such an era, the expansion of the new minds will indeed look like a fiery nova of cooperative capitalism blossoming out from the inner solar system at the speed of thought. Moravec does suggest that traditional human minds will be hopelessly uncompetitive pensioners in the new economy, confined to heavily metaphorical interpretations of activity increasingly beyond their ken.

The final quarter of ‘Robot’ is definitely not your father’s futurism. Moravec shows a solid understanding of cutting-edge physics and its implications for the future of thought: compact customized matter; the pros and cons of existing on the surface of a neutron star; the “Bekenstein bound” on computation in a single atom. This last one suggests that a single human body’s worth of atoms “could contain the efficiently encoded biospheres of a thousand galaxies—or a quadrillion individuals each with a quadrillion times the capacity of a human mind.” The future is in simulation.

Or is simulation the very cause of the future? Moravec patiently discusses the physics behind time travel with several thought experiments for sending messages or beings back in time. He also explains the closely related curiosities of quantum computation, where even the non-paradoxes can seem, well, paradoxical. Our universe is already saturated with particles that seem to have no regard for time’s arrow.

‘Robot’s’ final ten pages seem almost as dense as the sum of all that came before. Cosmological, philosophical questions on the nature of reality and consciousness boil over as Moravec uses Everett’s “many worlds interpretation” of quantum wave functions to argue the uselessness of differentiating between simulation and reality. Perhaps, he speculates, the universe only appears to have order because our minds make it so, and our minds will never cease to be, even after everyone else sees us as long dead. We may simply find ourselves in branches of reality where our continued existence is likeliest.

So ends ‘Robot’, at a place very different from the seemingly inconsequential dawn of robotics described in the beginning. Like the strange future we approach, Moravec’s ‘Robot’ is an accelerating trip into the unexpected.

Wednesday, November 24, 2004

I Want My Smart Receipts

I’m going to make you a small fortune. Incorporate yourself and take this idea to your nearest venture capitalist.

I want all of my receipts to be smart receipts. In other words, I want them all to be machine-readable without a scanner and supercomputer running the latest optical character recognition software. There shouldn’t be anything especially difficult about this. Many major chains already print bar codes on their receipts that employees can scan when working at the returns desk. The scanning register queries the corporate database to reopen the sale identified by the code on the receipt, and it is done.

Make it so that I can do the same thing using a low-end webcam or cell phone camera. Such cameras can already read traditional bar codes, so all I need on the receipt is some kind of code for a URL like: http://www.mostromart.com/receipt788849920?pin=76r8987. Make sure the address is printed below the bar code, just as with UPC codes, for times when the code is damaged or I am temporarily without a working reader. Finally, provide a small, free plugin for my favorite spreadsheet or money management program that will seamlessly use this address to enter the data from my receipt directly into my software.
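
Here is a minimal sketch of what that plugin’s core could look like, assuming (hypothetically; no such public format exists yet) that the retailer serves the receipt as JSON with date, store, and line-item fields:

```python
import csv
import json
from urllib.request import urlopen

def import_receipt(url, out_path="receipts.csv"):
    """Append a smart receipt's line items to a spreadsheet-ready CSV."""
    with urlopen(url) as resp:
        receipt = json.load(resp)
    with open(out_path, "a", newline="") as f:
        writer = csv.writer(f)
        for item in receipt["items"]:
            writer.writerow([receipt["date"], receipt["store"],
                             item["description"], item["price"]])

# e.g. import_receipt("http://www.mostromart.com/receipt788849920?pin=76r8987")
```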

What’s the big deal, you ask?

As it stands now, most receipts just clutter up my desk until it’s obvious I won’t be returning anything with them. Then they are thrown away. When you allow my computer to read my receipts, I will begin to truly take advantage of them. I will rack up the individually small but collectively large tax deductions an attentive filer can claim. I will conscientiously track my spending and spot worrisome trends at the category or product level. (I spend how much a month on Chip Blaster Deluxe Cookies??) I will scan all of those damned bits of crumpled paper as soon as I dredge them, lint-draped, out of my pocket, and get them off my desk, out of my sight, and into manila envelopes with witty inscriptions like “2004: In case of audit, break seal”.

Or maybe I won’t. But if good intentions are enough to keep fitness centers and weight-loss programs in business, they’re good enough for me to make demands of you, the entrepreneur who will sell affordable coding and hosting services to businesses unable or unwilling to meet my needs on their own.

Not that I expect to be the primary end-user target of smart receipts. I think hotels, copy centers, restaurants, and office supply chains will be the first to introduce smart receipt services as another way to entice the well-heeled business person—a customer much too important to fill out expense reports by hand. Next down on their target list will be the increasingly common self-employed soul with every reason to show the IRS just how expensive it is to run a business. And after that? Everyday people looking for the all-important moral high ground, in the form of numbers that can be waved in the face of a spouse about to throw another six-pack or pair of shoes onto the checkout counter.

Yes, there’s going to be some money changing hands, but I’m so generous I won’t even ask for a tiny slice of your profits. I’ll just sell my accumulated receipt codes to the highest bidder. Market research firms will love to get their hands on my receipts once they can get them online, without waiting for scraps of paper with data that must be entered manually. Sometimes I will carelessly drop my receipts in the street, or in public trash cans, where a homeless person will retrieve them for their codes while collecting aluminum cans. Market researchers will be able to use the continuous threat of people like us as leverage to reduce the exorbitant prices retail giants usually charge for this data—when they sell it at all.

You should now understand that smart receipts will be a win-win for everyone, and the biggest winner will be you.

Go now. I will watch for word of your success in the pages of Wired and Slashdot.

Tuesday, November 23, 2004

The Technology Squeeze: Housekeeping/Cleaning Personnel

The other day I discussed trends in technology that are eliminating jobs in the low-wage retail sector—jobs that people and policy makers probably never thought were really at risk. We have been conditioned since the industrial revolution to expect manufacturing workers to be replaced by automation, but low-end service personnel?

In this post, the next of many I expect to write, I will discuss another occupation whose members haven’t expected to be culled by technology, but who should probably start to worry. I use the word ‘culled’ deliberately, as humans can expect to remain in the loop in these professions for quite some time, rather than be eliminated wholesale; robots are still too dumb. Human-supervised clusters of self-checkout stations are an excellent example of a template we can apply, with some creativity, to many other areas.

Today’s target is housekeeping/cleaning. Hotel chains, airports, and hospitals all have large numbers of similar bedrooms and/or bathrooms that are periodically cleaned by crews of roving service personnel. A cleaning person or team goes through one room after another, vacuuming, dusting, swapping out the sheets, and scrubbing the bathtubs, sinks, and toilets. I expect these institutions to begin employing smaller teams using robotic assistants. (If I google hard enough, I’ll probably find that this has already begun.)

Vacuuming is a chore that has already proven tractable to robotic servants, as in the case of the Roomba. Roomba operates in a semi-random pattern to accommodate endless room configurations; as a result, it misses spots and doesn’t work very quickly. But since one hotel room is pretty much like another and must be vacuumed frequently, it would be easy and worthwhile to configure commercial-grade vacuum robots with layout programs—or the rooms they service, with discreet boundary wires or paints—to make coverage quick and thorough. A worker arriving to clean a room would check the floor for situations that might cause the vacuum trouble, then unleash it while changing the sheets and cleaning the bathroom. The vacuum would be finished by the time the worker is ready to move on to the next room.
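
For a pre-surveyed rectangular room, the layout program could be as simple as a back-and-forth lane generator. An illustrative sketch, with invented dimensions and swath width:

```python
def boustrophedon_path(width_m, length_m, swath_m):
    """Waypoints for back-and-forth coverage of a rectangular room.

    Trades the Roomba's semi-random wandering for quick, complete
    coverage; returns (x, y) pairs in meters, one per lane end.
    """
    path, y, heading_right = [], swath_m / 2, True
    while y < length_m:
        ends = [(0.0, y), (width_m, y)]
        path.extend(ends if heading_right else reversed(ends))
        heading_right = not heading_right
        y += swath_m
    return path

# A 4 m x 6 m hotel room with a 0.3 m cleaning swath:
# waypoints = boustrophedon_path(4.0, 6.0, 0.3)
```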

And what about bathrooms? Cleaning a bathroom is probably the least enjoyed and most difficult task in housekeeping. As a result, it is often not done as frequently or as well as it could be. This is considered acceptable in our homes, but not in public facilities, and especially not in hospitals. So I look to healthcare to lead the deployment of bathroom-cleaning robots, even before they can pay their own way from a labor standpoint. A toilet-cleaning robot might look like a nondescript box just a little larger than the toilet it cleans. After removing any excessive debris from the target, a worker wheels the machine over the toilet and plugs it into water and power supplies (a hose socket might be built into a specially designed toilet made by the maker of the cleaning machine). The machine sinks down, forms a gentle watertight seal with the floor, and proceeds to scour the toilet from all sides with scalding hot water jets and/or disinfectants, like a high-powered dishwasher. It blow-dries the toilet when finished, and signals completion to the worker, who is free to perform other tasks during the cleaning. The machine is then unplugged and wheeled to the next target.
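
The machine itself needs little more than a fixed cleaning cycle. A sketch of that sequence as a simple state list, with invented durations:

```python
import time

# Illustrative cycle for the hypothetical toilet-scouring box.
CYCLE = [
    ("seal_to_floor", 5),      # sink down, form the watertight seal
    ("scour_hot_water", 90),   # high-pressure jets from all sides
    ("apply_disinfectant", 30),
    ("rinse", 45),
    ("blow_dry", 60),
    ("signal_done", 1),        # light/beep so the worker can unplug it
]

def run_cycle(simulate=True):
    for state, seconds in CYCLE:
        print(f"{state} ({seconds} s)")
        if not simulate:
            time.sleep(seconds)

run_cycle()
```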

As for sinks and bathtubs, robotics might be forthcoming here as well, but would initially work best with fixtures designed for automated cleaning: combination sockets for mounting, power, and water, used by service people who always make sure the target is sufficiently prepped for scrubbing before plugging in a very utilitarian-looking machine.

Robo-moppers for the floors? Why not? The key to each of my above scenarios has been force-multiplication of human labor rather than dependence on the intelligence of robots. If eighty percent of the time a robot won’t have any problems, it makes sense to let a robot do eighty percent of the work. If a human will identify and correct the remaining twenty percent, the robots can do it all. The end result: lower costs, cleaner facilities, and workforce reduction among institutional cleaning personnel.

(To repeat an earlier disclaimer: I’m not passing judgment on the employees or decision-makers who will be involved in the transition. I wish only to explain an economic reality I hope we will be prepared for.)

Monday, November 22, 2004

'Profiles of the Future': A Blast From the Past

I’m scheduled to be the Immortality Institute’s guest chat person this coming Sunday, on the topic of futurism in the past and present. This is why my reading list has been so heavy on the future these last few weeks; I’d like to be at my intelligible best. Finding futurism from the present has been no problem thus far, but futurism from the past has been tough. Even forty or fifty years back would be nice, although I intend to go back much farther before drawing my conclusions, into the depths of that history stuff I'm supposed to know so much about.

With this in mind, I was hoping to get my hands on a copy of the 1962 version of Arthur C. Clarke’s classic 'Profiles of the Future: An Inquiry Into the Limits of the Possible', but the best my library could do was his 1984 revision. Some chapters are revised extensively, and others not at all, but Clarke doesn’t always tell you which ones, so I don’t know how close to the original I got. Most of it doesn’t seem to have changed too much, and holds up well even now, because this book does not try to defend any particular scenario of the future so much as describe the possibilities that exist within our understanding of the laws of physics; the changes in understanding between 1962 and 1984 didn’t close off much, if any, of that territory.

Older writers in a non-fiction format that amounts to a soap box can sometimes be tedious, but I found Clarke to be surprisingly readable and even playful; for example, his recognition that advances in communications and telepresence might greatly diminish the need for passenger vehicles does not keep him from explaining how hovercraft, conveyor systems, sky hooks—even antigravity and extra dimensions—might get us from place to place. (Yes, he admits that those last two methods seem pretty unlikely, but he can’t rule them out, and the point of this book is to cover the interesting things he can’t rule out.) Similarly, he accepts the possibility of superintelligence and uploads, but still talks about human-equivalent machines assisting humans in their daily affairs. I daresay that Profiles of the Future is the most entertaining non-fictional future study I’ve read in years—a blend of rational insight and old-school Sci-Fi fun.

But I wasn’t reading for the entertainment value. I was trying to be more critical. But between winning me over and playing it safe, Clarke didn’t leave me much chance to fault him. As if anticipating my needs, however, he appended his text with a rough, extensively disclaimered timeline: pay dirt. I quickly surmised that his general error class was the one Kurzweil most likes to bring up: short-term expectations (20 years or so) are a little too optimistic, while long-term predictions are much too conservative. He correctly expected cloning and super-heavy elements between 2000 and 2010, but he also expected manned planetary landings and non-cryogenic superconductors around this time. Well, I suppose the decade isn’t over yet. As for space elevators and the creation of greater-than-human intelligence: he didn’t see these until 2090-2100, which seems much too distant given the discovery of carbon nanotubes and the contemporary consensus of 2030 or so for greater intelligence.

I may just have to see what interesting things he’s said lately. He’s pushing 90 now, but his name still appears on collections and collaborations from time to time, so maybe he’s not completely out of the game.

Sunday, November 21, 2004

The Technological Double-Squeeze on American Wage Labor

Visiting my local Super Wal-Mart recently, I was treated to an unusual, but not unexpected, sight. A long line of shopping carts was being pushed into the store not by the usual two-man team, but by a single worker with a remote-controlled pushing device that propelled the carts from behind while he guided them from up front. I have worked in positions that entailed cart-pushing myself, and had wondered how the process might be automated. But I didn’t expect to see it so soon, because I knew what I was earning pushing carts. Any robotic replacement would have to be really cheap. Or so I thought.

In retrospect, it should’ve been obvious that there were other cost pressures at play besides my twice-monthly pittance. I had already used self-checkout stations at grocery stores, where one cashier mans (or often doesn’t man) a station of four terminals. Since these aren’t as fast as manned registers, I figured that each cluster of four was maybe eliminating one job. It should’ve occurred to me then that a pusher robot for shopping carts would probably cost considerably less than a couple of self-checkout stations. But how on earth do these stores recover the costs of these clever machines when the people they replace get paid so little?

It didn’t take me long to realize I had not been taking into account the hidden expenses even the cheapest human employees add for an American employer, particularly one not operating under the table. There is the employer portion of the payroll tax, the cost of goods stolen or damaged by employees, workers’ compensation claims and lost wage benefits for injuries on the job, legal fees and settlements when employees sue their employers, hiring and training costs, retention efforts, paycheck distribution, and all of the associated bureaucratic overhead resulting from the above.

But even as technology shrinks some of those expenses, it swells others. The first and probably largest category is medical. As we are so often reminded in this country, the costs of health care are soaring, thanks in large part to an extremely healthy market for corporations that develop costly new diagnostic tools and drugs. The benefits of any one procedure or pill are often very limited, but the spoiled patient will usually get them all. And since practically all American medical care is given by professionals who either have a profit motivation for pulling out all the stops, or who are completely isolated from the costs of technology, we are all spoiled patients (except for the growing number of uninsured, who don’t count in a discussion of workers’ compensation expenses). I’m not sure that we would even want a saner third alternative that helps rein in costs, because the American attitude towards healthcare is that everyone should get the best care possible no matter what. It’s noble and good, but it drives up the cost of workers’ compensation insurance as much as ten percent a year for many employers.

The second major way in which technology is making even cheap employees more expensive is litigation. Thanks to modern media and the internet, everyone knows about the million-dollar settlements paid by corporations and the diverse ways in which their employers might be screwing them over. And, with just a few short clicks, you can find entire legal firms dedicated to pressuring employers into settlements for your mutual enrichment.

So we find ourselves in a situation where retaining human employees is increasingly costly even though the paychecks of the bottom-feeders remain low. Let’s face it: From the standpoint of an American retailer, human employees are lazy, flighty, shifty, clumsy, and litigious. Combined with the inexorable tendency of high technology to become cheaper over time, we have a technological double-squeeze leading American shoppers to bag their own purchases after checking themselves out at a store where they probably had to compete for the attention of a handful of service personnel running around in response to chimes and lights scattered around the building.

It’s not that we don’t like personal service along with our modern comforts--just ask any Westerner who’s had the pleasure of visiting developed areas in China lately. No, we live in a self-service society because we don’t like paying the surprisingly high cost of American low-wage workers in the form of more expensive goods at checkout time.

(Note: I’m not passing judgment on anyone here, I’m just trying to explain one of the many economic realities shaping our future that we must learn to cope with. More on this topic later.)

Saturday, November 20, 2004

'BLUR': When the Oracles Have Stock Options

At first glance, ‘BLUR: The Speed of Change in the Connected Economy’ looked like an exercise in buzzword branding—a 7 Habits of Highly Effective Dot Coms, as it were. Perhaps it could not be helped, published as it was in 1998, during the gilded age of ‘irrational exuberance’ (the authors actually discuss why this very phrase is misguided) and creative accounting that made WorldCom (also extolled) seem like the perfect role model for aspiring entrepreneurs.

The first half of the book didn’t do much to improve my impression, since authors Stan Davis and Christopher Meyer seemed to think it would be very out-of-the-box to stud every paragraph with the word BLUR (always in all-caps) used as verb, noun, and super-duper mission statement. I hung on because there were some interesting tidbits mixed in, and it was enjoyable to read what is essentially a futurism text written in the language of business and economics. Buyers are becoming sellers. Sellers are becoming buyers. Small businesses are gaining big-business reach. Services are becoming productized, and products are increasingly tied to services. I doubt much of this would be new to anyone who works in business management or investment today, but it was just fresh enough to keep me going.

By about midway through, Davis and Meyer finally hit their stride. The BLUR count drops, as does the number of proactive ‘Does your business do this?’ questions. In the most interesting section, the authors apply their earlier conclusions regarding the changing relationships between buyers, sellers, competitors, and investors to individuals. They take the philosophy of person-as-enterprise, not worker-as-employee. In the new economy, they argue, an individual must see every job and relationship in terms of its future value; the position you have today may be at least as valuable to you for the experience and cachet gained as for the wages and benefits earned; the friends and contacts you made at your previous jobs may be more valuable to your current employer than your job skills. Intangibles like these have value, and markets will increasingly find ways to price them and trade them.

In fact, BLUR describes a future where individuals routinely sell stock options in themselves, sharing slices of their future earnings in exchange for cash up front. As this would be a relatively high-risk loan, they argue, it won’t take long before personal investment mutual funds spring up, investing in groups of workers, spreading out the risk and opportunity to create reliable performance. Just make sure you retain control of your destiny by keeping at least 51% of your stock. There’s a bit of advice your parents probably never gave you…

It’s all pretty stimulating, but since we now walk through the graveyard of bubbles long burst, I couldn’t help having serious doubts about how far I could trust these economic oracles. Was the New Economy just a dream? I knew just the way to phrase this question to my browser.

The answer arrived in a dialogue box a few seconds later: The book’s vaunted companion website—interactive, innovative, in a word: BLUR—is no more.

Friday, November 19, 2004

Signs of the Times

How will you know when the Singularity has arrived, or is about to arrive? How would you recognize that technological progress has become too fast for you to comprehend?

When you can no longer tell who, if anyone, is joking in a conversation like the following.

This is a snippet of actual dialogue from the SL4 chat room today; I have changed the names of the parties to protect the innocent (as a rule, logs of this room are not to be posted).

Horace: BCI is making big progress lately
Louis: what is that, enlighten me please :)
Louis: *?
Horace: Brain-computer interface. Who needs keyboards anymore?
Louis: ahh
Pringle: What progress is that?
Horace: Google something about sending emails with your thought
Louis: I've been waiting for those, can I buy such from the local market? (I rarely visit those)
Horace: Not yet, I'm afraid
Louis: aww :(

Thursday, November 18, 2004

'The Spike' Wins a 'Mitchy'

Every once in a while a book comes along that causes me as a reader to feel a profound sense of gratitude to the author: a book of concentrated insight representing the apex of a massive aggregation of scientific and scholarly study—the layman’s payoff for the esoteric labors of thousands. Two examples in this category would be 'Guns, Germs and Steel' by Jared Diamond and 'The Moral Animal' by Robert Wright, explaining the more interesting and useful conclusions of anthropology and evolutionary psychology, respectively, along with a host of related subjects.

'The Spike', by Damien Broderick, is the latest tome crossing my desk to earn the prestigious Mitchell Howe Gratitude Award for Explanatory Excellence in Service to Humanity. This book is perhaps best described as an arena in which the top competing concepts in futurism are carefully introduced and pitted against each other in brief vivisectional combat. The most consistent champion is Broderick’s scarcely disputable thesis: for better or worse, our world as we know it will disappear during the first half of this century—the inevitable result of accelerating technological advancement. We find ourselves at the elbow of a slope that ‘spikes’ ahead of us with staggering steepness, whether we are graphing the trends in biotechnology, automated manufacturing, nanotechnology, artificial intelligence, or some combination of the above.

Unlike the better known (at least in America) future studies of Ray Kurzweil ('The Age of Spiritual Machines', 'The Singularity is Near'), 'The Spike' never comes across as sensational, dumbed-down, or one-sided, and should therefore find a much more receptive audience among the highly educated and highly skeptical. As an unavoidable consequence, however, 'The Spike' may make difficult reading for those who do not meet a moderate threshold of general and scientific literacy. I recommend 'The Spike' to anyone who has graduated from Kurzweil, or who could expect to test out of it, given the chance.

Broderick is nothing if not thorough. He confidently marches the reader through canonical transhuman topics like uploading, immortality, and the Fermi Paradox, and continues straight into Shock Level Four territory, with discussion that includes Jupiter Brains and Matrioshka Brains, until he’s reached the intersection of the singularity and cosmology with Tipler’s Omega Point concept. Given the patent unpredictability of anything past the creation of greater-than-human intelligence, we might call this last stretch of Broderick’s journey a well-grounded study of wild speculations, if such a thing is possible. Whatever it is, it’s fascinating.

But he’s not finished there. He concludes with no fewer than twelve scenarios in which the Singularity/Spike is reached, or not reached, as a result of roadblocks and opportunities along the way. The reader is then free to wade through the plentiful endnotes backing up various points of the text.

Authors like Broderick are the reason I’m so happy to live near a decent library again. In fact, this reminds me: I have another two-year-old wish to fulfill. Let’s see…

Yes. 'Transcension' is checked in…So, if you’ll excuse me…

Wednesday, November 17, 2004

On Colds and Fluids

The baby has been sick this week with a nasty cold. After a few days, I started coming down with symptoms myself. Determined to do what I could to get over it, I decided to make sure I took my daily multivitamin and drank plenty of fluids. But wait, I thought, ‘drink plenty of fluids’ sounds like it might be one of those tenets of medicine that’s just been passed down through the years without any scientific basis. It’s obvious that drinking is very important if you have, say, repeated episodes of diarrhea that make you a dehydration risk. But on the other hand, people with run-of-the-mill colds don’t usually have diarrhea, and often have the opposite problem. Is there any scientific justification for not following your body’s internal cues of thirst in this matter?

I asked Google for wisdom.

The only thing I could really find was a commonly cited study from early this year (as in this article) offering some evidence that drinking extra fluids may actually do more harm than good, because respiratory illnesses tend to trigger a water-retaining reflex; increasing consumption on top of this may upset the body’s salt balance. The study was very far from conclusive. Doctors generally say they dispense this advice in the belief that it reduces congestion by thinning mucus secretions. This would make the patient more comfortable, and also reduce the chance of a secondary sinus infection. Coming from a family where secondary sinus infections are the norm, I decided, on this occasion, to go with the fluids. I didn’t drink like a fish, but I made a conscious drinking effort that definitely upped my hydration level, taking care not to drink too much in any given sitting, and to eat small snacks to keep from excessively diluting my salts. I don’t think I’ve made this kind of effort on any previous cold.

I’m not really surprised that this common wisdom has never been carefully studied. There’s no direct commercial interest in doing so. But one would think this would be the kind of study a government agency or non-profit could commission. After all, the common cold supposedly costs the U.S. economy $40 billion annually. Anything to reduce the number of sick days, doctor visits and ineffectual antibiotic prescriptions would seem like a good idea.

And I’m curious as to how or why the water-retention response to respiratory sickness might have evolved, but this is not a question I expect can be easily answered. There was probably nothing ‘common’ about colds prior to the development of agriculture, and I don’t know enough to say whether the water-retention reflex is triggered directly by the ailment or by an evolutionarily advantageous immune response. If it was an evolved response, what was the advantage?

Oh, the result of my ‘experiment’? Inconclusive. I’m just one person, after all, and who’s to say whether I had an actual cold robust enough to cause the usual grief? But I seem to have gotten over this cold with ease, breathing easily during most hours of the day and skipping the gummy green horror of a sinus infection.

Now you know. I’m truly a blogger now.

Tuesday, November 16, 2004

'The Elements of Fiction Writing': Better Writing Through Chemistry

At some time or other, I’ll bet you’ve seen a chemistry teacher hold a class spellbound as he or she made simple adjustments causing solution A to suddenly turn clear, solution B to froth and boil, solution C to solidify, and solution D to explode. But have you ever seen a writing teacher get the same reaction, causing paragraph A to suddenly turn clear, paragraph B to froth and boil, paragraph C to solidify, and paragraph D to explode? The authors of the books I’ve read in the 'Elements of Fiction Writing' series make a habit of such educational showmanship.

Despite my long-running interest in the craft, I had not independently sought books on the subject until recently. My experience with English and writing classes in high school and college had given me the erroneous impression that such texts were merely vague motivational tools; that a mastery of the storytelling craft could only come from within, after absorbing the highly anthologized works of others. In retrospect, it’s obvious that books of distilled technique do not fit easily in the curriculum of a large multi-purpose language class, and that it’s much easier for a teacher in that position to fill a syllabus with an annotated anthology than with a concentrated guide to technique.

The 'Elements' authors have felt my pain. Like almost every writer whose advice I have sought on the web, they all seem to introduce themselves by telling how they were pulling heartfelt-but-troubled manuscripts out of the depths of their souls until some kind veteran took them under wing and schooled them in the tricks of the trade. What? You mean good writing can actually be learned by something other than failure?

In Plot, Ansen Dibell masterfully addresses what had been my biggest pre-writing concern: building a coherent plot skeleton on which you can hang the manageable building blocks of scenes. This is the skill of crystallization, of creating a basis for choosing which ideas to pull out of the intimidatingly infinite possibilities of your own imagination.

In Jack M. Bickham’s Scene & Structure, plot is analyzed in finer grain. Everyone knows that a story has rising action and falling action, but what do you build this roller coaster out of? Scenes! And Bickham shows you exactly how to build them. I don’t think anyone could have done it more clearly, either, since his is the high-octane, no-nonsense style that brought us brisk novels like Twister.

Orson Scott Card shows you how to get personal with Characters & Viewpoint. Card was my reason for checking out this particular series in the first place, for I know of no other more skilled at making me care about his characters. By learning his techniques you will be able to put the experiences of yourself and your closest friends into stories in ways so twisted nobody will ever be able to tell; you will have created new and believable people to populate your universe. Card also succinctly demonstrates the tradeoffs you will make when you pick a particular viewpoint in which to tell your story.

Bickham gets a second chance to impress with Setting, and he succeeds again. Setting is often disparaged in writing courses as something most amateurs already overdo on their own. Bickham does not flinch from the peril, and successfully teaches how to use setting to enhance, rather than bog down, a story.

In a similar vein, Monica Wood’s contribution, Description, elucidates the factors that you should use to determine how much description to use on a given occasion. This volume also deals very competently with what most others would probably lump under style. But in any case, Wood knows her craft and her examples are great.

Beginnings, Middles & Ends, by Nancy Kress, explores the same topical space as Plot, but from a different enough perspective to make it worth reading anyway. Different writers have different styles and ways of approaching a problem, and something can be learned from each.

Conflict, Action, and Suspense could hold its own as a stand-alone book, but I don't think William Noble’s volume adds much to this series. I feel that most of his points are made more clearly by the other authors.

Lewis Turco’s Dialogue is another soft spot. (I already felt pretty competent in this area, so he had a tougher job to begin with.) Turco covers the territory, but by writing his entire book as a dialogue about dialogue, he trades clarity for cleverness. Every part of every page becomes example, but without the focused precision of the example snippets used in the rest of the series.

(There are two other books in the series; one, Manuscript Submission, won’t help me yet, and the other, Voice & Style, is not carried by my library.)

Each volume in this series weighs in at a lean 160 pages or so, a good length for reading in a day, though some of the authors strongly recommend reading over days or weeks between writing sessions.

A great thing about how-to books on writing (versus how-to books in general) is that they are written by people who know how to write. Each author here is an excellent explainer, and each keeps their book interesting with personal anecdotes about finding the solutions to story problems. But it’s their examples that reveal these writers as masters of the ‘Elements’. With just a pair of sentences or paragraphs, they dazzle and amaze, surprise and teach. They’ll earn your ‘ooh’s and ‘ah’s.

Monday, November 15, 2004

The Polygon Fallacy and Repellent Realism

Alvy Ray Smith, one of Pixar’s founders, is often quoted as stating that a computer would have to process 80 million polygons a second before its output would be indistinguishable from reality. While I have no idea how he settled on this figure, I don’t have any reason to doubt it. It’s easy to use this figure improperly, however, in a type of “necessary but not sufficient” mistake we might call the “polygon fallacy.”

As a glaring example of what I mean, virtual reality pioneer Mark Pesce, in his book The Playful World: How Technology is Transforming Our Imagination, tells the story of Sony’s PlayStation 2 as a quest to come closer to the 80 million figure. The end product fell a good 60 million polygons per second short of this holy grail, despite being 50 times faster than the original PlayStation. (There are many ways to measure such things. I’m just using Pesce’s numbers because I have them in front of me.) But the PlayStation 3 is supposed to be 1,000 times faster than the PS2, he says, going on to conclude that, if this is so, then “the images it generates will be completely indistinguishable from the real world, as complex and as rich as anything you might encounter in real life.”

Whoa, partner! Where is this complexity and richness supposed to come from? Someone or something has to create that level of complexity before it can enlist a machine to render it. It’s just not practical to do this today, and there are in fact good reasons not to jump at every opportunity to increase realism. The better game design firms, such as Blizzard, don’t load their gameplay or cut-scenes with the motion-captured images of Hollywood actors, although it would be cheaper and easier for them to do so. In the context of most game universes, which are necessarily—even intentionally—much less realistic, such tricks draw too much attention to themselves. They make the necessary and voluntary suspension of disbelief more difficult for the gamer to maintain.

This is a lesson the creators of the new movie The Polar Express might have wanted to take into account, as they captured an unprecedented amount of data from actors’ faces (mostly Tom Hanks) to actuate various characters in the film. This technique produces the most realistic animations of people ever, but at what cost? I haven’t seen the movie yet, but if the reviews are to be believed, then the effects wizards have managed to hit that known sweet spot of creepiness where the mind is simultaneously drawn inward by realism and repulsed by subtle imperfections. Pixar and others have deliberately kept their characters more cartoony for this reason. Is it not better to harness reality only inasmuch as it draws the audience into the story? Is it not wiser to avoid state-of-the-art gimmicks that merely draw attention to the medium?

That said, I do expect the richness and complexity of game environments to increase over time. As textures, shading, and physics improve, motion captured actors will not seem so out of place. Better tools for digitizing the properties of actual materials and the contours of actual locations will also help bridge the already narrowing gap between production techniques effective on film and in games. But for the foreseeable future, gaming environments are just not going to be mistaken for television programs, no matter how many billion polygons come screaming at you.

Sunday, November 14, 2004

'Tomorrow Now': A History of the 20th Century

I've read some of Bruce Sterling's better short fiction, and heard a recording of him speaking on the subject of the Singularity. (He seemed to understand what it was all about, even if I found his cavalier attitude towards it rather disconcerting.) These were the colorful, if narrow, visions of one who had obviously done his homework, and I was curious to see what kinds of images of our future he might project on the wider screen of a non-fiction book. So I recently checked out Tomorrow Now: Envisioning the Next Fifty Years from my local library.

It's a bold title, and his first chapter, on biotechnology, lives up to it. Sterling provides a detailed montage of a new lifestyle: a generation that embraces and guides the diverse micro-ecologies around us and within us, instead of indiscriminately assaulting every microbe with broad-spectrum antibiotics, toothpastes, deodorants, etc.

Had he maintained this assertiveness in his other six chapters, on education, love, politics, war, business, and aging, Tomorrow Now would probably be the premier futurist survey on the market. Instead, by chapter two he has seemingly cast off the shackles of editorial oversight, coasting along on timid, wandering note-dumping and ruminations on the character of modern life.

He discusses love in terms of relationships he's had with technology itself over the years. A history of the German Reichstag building becomes an extended metaphor for the evolution of political systems in the last half century. He personifies modern war through biographical study of a few celebrity rebels ("martyrs", "terrorists", "freedom fighters", depending on whose side you're on)--the kind of warriors that give today's traditional armies so much trouble.

I don't want to imply that there's anything wrong with his analysis of 20th century history; it's as insightful as anything I've read, and makes good reading in its own right. Unlike many futurists, Sterling obviously understands the importance of taking more than just technology into account when building projections. It's less glitzy, more complicated, and horribly difficult, but ultimately produces the most reasonable results. All futurists are wrong, but some are more obviously wrong sooner.

But while playing it safe is understandable, Sterling fails to deliver on a stated promise. Tomorrow Now does not envision the next fifty years for more than fifty of its 320 pages. By failing to assemble his insights into a working futurism projector, Sterling misses out on the big payoff he might be capable of. He pours a bag of intriguing gears, springs and optics into the reader's lap, pausing only to give Andy Rooney-ish commentary before disappearing into the night.

Saturday, November 13, 2004

Sesame Street Report Card

When you're the parent of a toddler, you get to know your Sesame Street. It's fun to see what's changed and what hasn't since you were its target audience. I was watching back when Snuffy was still imaginary (and silent?), and "Beets Beets Sugar Beets" was among the more memorable tunes.

It's impossible to tell whether my more sophisticated observations could've applied to the Sesame of a quarter century ago. Have the old-timers in the cast acquired new undertones, or was I merely incapable of seeing them as a child? Grover has a subtle edginess; he sometimes--not often, but sometimes--comes across as a Krusty the Clown in blue fur, tired of children and ready for more adult pursuits. Our old friend Bob somehow manages to look self-conscious in some of those embarrassing sketches he perpetually finds himself in, earning chuckles from my wife and me as he dances, clucking and pecking, in a chicken suit, or stars as the insidious Trianglebob Trianglepants. Bert and Ernie continue to bother me as they glorify Ernie's playfulness at the expense of intellectualism. And Big Bird seems to be immune to the passage of time, remaining 100% in character despite 30 some-odd years housing the same actor.

I'm even less qualified to remark on the newer cast additions, but I'm going to do it anyway. Rosita, the fluorescent blue monster from south of the border, immediately won me over when she played the blues on her Spanish guitar. Gabi, the human teenager, couldn't have a bigger smile if she had a run-in with the Joker; she uses this warmth to good effect in the classic style of a children's entertainer. She's cloyingly sweet, but definitely seems to enjoy what she's doing, and she does it well. Miles, the other teen, would've gotten along just fine on either the Cosby Show or a preliminary round of American Idol, but lacks conviction in his faddish pop star voice.

But what's this? I saw something that looked like sexual tension between Gabi and Miles... We'll have to keep an eye on those two. They probably shouldn't be allowed in Oscar's trash can alone together. Take it to another street!

The celebrity sketches don't seem to have room for mediocrity. Norah Jones gave an ingenious acting performance alongside her musical one in "Don't Know Why 'Y' Didn't Come," while Natalie Portman ran Hooper's store for a day with a sickening demeanor suggesting an underage prostitute at church. Was she coming on to Big Bird? Maybe size does matter.

Here's my quick, totally subjective, and highly selective report card for the Sesame Street I've seen this past month. I trust my fellow viewers will understand.

  • Thumbs Up: Rosita, Prairie Dawn, Bob, the number 16, the dancing vegetables, Norah Jones, Natasha, Aaron Neville, Beijing acrobats, tie dye, 'Hace Calor', Suzie Kabloozie, Seth Green, Dr. Feel.

  • Thumbs Down: Elmo, Baby Bear, Dr. Phil, Natalie Portman, Trash Gordon, the Noodle family, Grunchetta, Madlenka, 'Do De Rubber Duck', Global Thingy.

This entry has been brought to you today by the typographical symbol '#'.

Friday, November 12, 2004

The Diamond Age

It's almost embarrassing that it took me this long to get my hands on a copy of The Diamond Age: Or a Young Lady's Illustrated Primer, by Neal Stephenson. It's practically a pillar of that temple of nerdiness revered by my dominant brain hemisphere. Libraries, so otherwise sympathetic to my needs, kept thoughtlessly lending this book out to others. But my days of having to nod politely at the conversations of Slashdot types are now over, and I can stop plotting ways to shift the subject away from Neal Stephenson and onto, say, their preferred Linux distro.

Last weekend I toured a 'compiled' future where nanotech moguls and Confucian judges commune in exotic locales ranging from floating neo-Victorian manors to the House of the Venerable and Inscrutable Colonel (aka KFC). I plunged into a customized fairytale conjured up on the fly by a charismatic educational tool: a Primer powered by a combination of exotic nanotech and sympathetic voice talent. I cheered the exploits of an underprivileged girl whose improbable relationship with the book steers the plot towards a unique, sophisticated climax.

Stephenson's multithreaded treatment hearkens back to the days of Dickens. Given the aesthetic tastes of the neo-Victorians pulling many of our protagonists' strings, it is surely no accident that a few minor but lovable characters seem to enjoy forbidden extra-helpings of page-space pudding. Setting descriptions occasionally wax past the point of polish into thick veneers that threaten, but fail, to dull the shining stories within them. And some people ride horseys!

For a jaded reader like me, 'The Diamond Age' was a real gem. It sparked real emotional involvement in a fully-rendered future. And unlike so much science fiction I read these days, the obligatory orgy scene had a point.

I've never felt closer to my poindextrous brethren. Time to explore the strange new world of 'interlibrary loan'.

Wednesday, November 10, 2004

My Other Blog

My other blog, Evolution by Choice, fell into disuse when my life took a turn for the stressful. I was also beginning to feel bogged down by upcoming questions for the Singularity Q&A I was building there -- questions in areas I had hoped to be more technically grounded in before answering for a lay reader.

Two moves, two jobs, and a baby later, I'm getting back into the blogging scene. I loved the layout I made for EBC, but I wanted convenience and comment capability this time around.

As my profile currently states, I'm in the process of trying to migrate into a career quite unlike the one my history-teaching degree qualifies me for. Software architecture is the general area I have in mind, and I've started using the ADU curriculum to ground me in the fundamentals. I spent the first part of the year learning the basic math I had forgotten or never learned in high school. I also gained a functional understanding of the Python programming language. Now I'm slogging through calculus.

At least, that's the idea.

I'm currently the primary caregiver for my son, in an interesting twist that also has my wife working at the same kinds of jobs I was doing just a few months ago. (The moccasins are truly shared in this family.) Jason doesn't take kindly to me mentally spinning up to technical subjects like calculus, so there's a lot of screeching and squealing and not a lot of traction when I hit the books.

But I'm determined to be productive in any way I can. I've largely given up computer gaming this year, as much as it pains me. Though difficult, I find I can still read and write in the short, unpredictable bursts of time full-time parenting affords. On good days I can read one or two hundred pages and write one or two thousand words. On bad days... well, I'd rather not talk about those. Isn't that right, Jason? wvssssn (<--- The more attentive among you may notice that Jason uses Dvorak, just like his old man.)

What am I reading and writing, you ask?

Well, the writing bug bites and tugs at my flesh from time to time in a manner not unlike Jason, and this is one of those times. I've wanted to write sci-fi since my teenage years, and though I keep dismissing the urge as utterly impractical, it always comes back. So during this bout, I've read a series of books and websites on the subject of writing. I've also sampled the state of sci-fi over the past year by reading a couple of hundred short stories and various novels. At the keyboard, I've been banging out notes for my current novel concept, as well as short scenes and dialogues intended solely as practice fodder. You'll hear more about what I'm reading and writing in later posts, I'm sure.

Whatever I'm working on, I plan to talk a lot about problems that readers can help me answer if they're feeling charitable, so I thank you in advance for putting up with my ignorance.

Too bad you can't help me find Jason's sippy cup.