Once computers were very big (an entire floor of a large office building) and very fast (or so it seemed to puny humans). Today, they are much bigger, not in real size but in memory and storage. They are also much faster—in real time. Their speed is beyond human comprehension, like the speed of light.
Computer memory and storage have become so big that using them has almost zero cost. As a result, software makers feel free to be needlessly wasteful. Computers have also become so fast that the same software makers can afford to be mind-numbingly inefficient.
Hardware doesn’t just grow in power and speed, it also adds new gizmos. Cameras, touch-screens, and the ever-present Internet weren’t even dreams in the beginning. However, we still need software to operate the new gizmos. And wasteful, inefficient software cannot produce smarter computers.
The overabundance of storage and speed subverts the incentive to improve software. Without the environmental pressure of limited resources (e.g., storage and speed), software won’t evolve the way organisms do in nature. To evolve, software needs to keep the best from the past, try new ideas, and keep what works.
So while hardware inevitably gets bigger, faster, and does more, software will not become smarter. (I’m talking about real capability, not clever gadgetry or advertising puffery.) It will not improve as long as its makers can keep making money doing what they’ve been doing.
After all, money is the sole yardstick computer makers use to measure the success of their devices. Whether they are better products in any real sense—efficiency, productivity, true functionality—is entirely irrelevant. What really counts is that they appear, as the old slogan says, “new and improved.”
Therefore, computer makers focus on advertising: words and pictures designed to fool people into paying more to acquire far less than they expected. Their disappointment can be alleviated only by buying the next “new and improved” device. Hardware gets better, but computers will never be smarter than their software.
May 20, 2013
May 13, 2013
The Internet connects everyone and everything, but getting to the thing (or one) you want is sometimes more frustrating than having no access at all. The big roadblock on the Information Superhighway is the failure of search engines to retrieve information efficiently.
They may look powerful, returning tens of thousands (or millions) of hits in fractions of a second. So what? That’s not how search speed should be measured. The clock shouldn’t stop until you’ve retrieved your information.
What counts is not how fast you get off the line (0-60), but how long it takes to get to your destination. Search engines give you a gargantuan haystack, but it’s up to you to dig in to find your particular needle. Showing you all these hits very, very quickly is barely the first step of a long journey.
To get where you want to go, you actually have to look at these hits. Often, you sit and wait and wait until a page loads. Search engine speed means nothing when it takes tons of your time to examine just a few pages. And you lose more time on pages that make you wonder what they have to do with your search.
The answer is in three letters: SEO. No matter what you’re searching for or which search engine you use, the hits you get are never really the best matches. Many are there simply because their position was manipulated by Search Engine Optimization.
SEO companies constantly peddle this service. Even Internet Service Providers push SEO. The goose keeps laying golden eggs because every search engine has different ranking criteria and keeps changing them. If you merely want to maintain your place in the hit list, you must keep paying for new optimizations.
It’s legitimate for sellers of product X to want to show up higher in a search for that product. But optimization is being abused by companies selling Y and Z, not to mention a variety of Snake Oils! These irrelevancies only waste your time. They’re detours and potholes on your way to find product X. Encounter enough and you’ll never get there.
May 6, 2013
Last week I navigated three government-related websites. Two were what you might expect, but one was surprisingly close to perfect. The real surprise was realizing how long it had been since I’d seen any site work so well and so simply. Its purpose wasn’t simple, but (almost) everything worked just as expected.
That’s how everything should work, not just websites. Not just technology. Tools that do not work as expected (especially after reading the instructions) are positively maddening. Yet making things work as expected is not how designers are taught. They are told things should be intuitive.
They are, but there is a big difference between an intuitive interface and one that works as expected. To begin with, the two words are not remotely synonymous. Intuition is a hunch; expectation is a likely possibility. Intuition is a guess; expectation employs reason.
Specifically, the dictionary tells us intuition is knowing without the use of rational processes. The racetrack tells us that people who rely on intuition and play hunches lose much more than people who play the odds—the expected probabilities.
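To make “playing the odds” concrete, here is a tiny sketch of my own (the wagers and probabilities are invented for illustration, not taken from the racetrack or the post): the expected value of a bet is the sum of each outcome’s probability times its payoff.

```python
# Expected value of a bet: sum(probability * net_payoff) over all outcomes.
# Both wagers below are hypothetical, purely for illustration.
def expected_value(outcomes):
    """outcomes: list of (probability, net_payoff) pairs for a $1 bet."""
    return sum(p * payoff for p, payoff in outcomes)

# A hunch: a 10-to-1 long shot that actually wins only 5% of the time.
hunch = expected_value([(0.05, 10.0), (0.95, -1.0)])  # -0.45: lose 45 cents per bet
# The odds: an even-money favorite that wins 55% of the time.
odds = expected_value([(0.55, 1.0), (0.45, -1.0)])    # +0.10: gain 10 cents per bet
print(hunch, odds)
```

Over many bets, the hunch player bleeds money while the odds player comes out ahead. Expectation, not intuition, is what pays.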
Speaking of numbers, the phrase “intuitive interface” gets 17 million hits on Google. Obviously, we’re constantly told interfaces should be intuitive. Or should they? In fact, there is a growing backlash against the term, which is now being called “the I-word.”
Saying an interface should be intuitive is worse than bad advice: it completely misleads designers. The only intuition available to designers is their own, yet the interfaces they create are for others to use. The word intuition inhibits designers from thinking about others’ expectations. It becomes a narcissistic mirror, obscuring the user’s point of view.
For decades, designers and users have been told interfaces should be intuitive. Nonsense. Intuition is far too subjective. Interfaces should behave as reasonably expected. Designing for intuition may sound glamorous, even mysterious, but designing for expectations gets better results.
April 29, 2013
Upon reflection, last week’s post seemed a little harsh on Web page programmers. Maybe, but the last paragraph spread the blame among “a great many people involved in web pages . . . ”. Like who? Like you. (Poetry trumps grammar.)
I’m not saying you are directly responsible for (if I may borrow David Letterman’s phrase) stupid programmer tricks. Not directly. However, indirectly, we are all responsible to some degree if we don’t complain when we encounter needless stupidity. Honestly, I’m only writing this post (and much of this blog) because I don’t see or hear anyone else complaining.
I’ve been making Web pages for almost 20 years. Thousands. The learning curve was steep at first, with a new programming language (worst I had seen in decades) and grossly inconsistent browser implementations. However, the best examples were easy to find, and source code easy to steal. (Code was simpler back then due to fewer code generators.)
Although changes came rapidly in both languages and browsers, pages kept getting better and inept designs dwindled—until a few years ago. Since then, a number of factors have been lowering page quality and promoting incompetence: out-of-date code generators, unwillingness to learn the basics of CSS, and foolish ideas becoming fads.
Keep in mind the culprits creating this crap are nowhere near the top of any organizational chart. Yet, no one above them (or even near them) saw this stupidity and did anything about it? No one? Really? That, even more than the specific foolishness, is the real problem.
It’s not simply that some wannabe web designer does some really (and I mean really) dumb new thing. We’re all human and make mistakes. Errors will slip through, despite everyone’s best efforts. But if junk persists and proliferates, then the fault lies with management.
Bad pages are more than programmers’ mistakes. It’s worse than that. Did no one connected with this company and/or its website see the problem? Did no one—in the company or on the Internet—realize it was stupid? Did no one speak up? Did you?
April 22, 2013
April 15, 2013
The last post outlined data’s three essential qualities: veracity, accuracy, and precision. Humans are good at the first two; computers excel at the third. However, we cannot judge accuracy without first determining veracity, i.e., how the data corresponds to the real world. How can we be sure? Most people would say Science.
True, but most people don’t fully understand what is meant by Science. Ancient Greeks depended on their powers of reasoning. The Middle Ages saw the rise of Empiricism, questioning established truths. Thus began the Scientific Revolution and the evolution of Scientific Method.
Most people (and some scientists) don’t always employ the Scientific Method. They use their feelings and intuition; they go with their gut. On occasion, they make the effort and put on their reasoning hats. Unfortunately, they don’t understand reason any more than they understand Scientific Method.
In Star Trek episode 38, “Metamorphosis,” Spock says, “. . . humans are essentially irrational.” At times we are, but maybe quasi-rational would be a better label. We act as though we are rational, ignoring (or denying) any flaws. Like the confident drunk, our perceptions are distorted. We are blind to the limitations of our rationality.
Even without mental impairment, how are we to know if our thoughts correspond to reality? The schizophrenic doesn’t know. The rest of us like to think we do, but we can’t be sure. That’s why we use Scientific Method. Human reason, by itself, is not enough. Neither is the logic of the computer.
Humans are not intelligent creatures in isolation. They must interact with other humans. To become human, children must be raised by humans. Children raised by wolves have the same brains, but if humans do not find them in time, they never become fully human.
We directly apprehend some of reality; for the rest we need other people. This is the Achilles’ heel of Artificial Intelligence. AI, no matter how “superior,” cannot function on its own. It too needs people to help it know reality. In isolation, it is no smarter than the drunk who gets behind the wheel.
April 8, 2013
Data has three essential qualities: accuracy, precision, and veracity. Accuracy tells us if we are close to the target (in the ballpark). Precision tells us how exactly an answer is measured (how many decimal places). Veracity tells us whether the data corresponds to the facts (the real world).
The difference between accuracy and precision is easy to illustrate with target shooting. A five-shot group is precise if it fits under a postage stamp. But such a grouping is not accurate unless it’s in the center of the target. If it’s not, it needs to be corrected for windage.
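A minimal sketch of that target-shooting analogy (my own illustration, not from the post): precision is how tightly the shots cluster around their own center; accuracy is how close that center is to the bullseye.

```python
# Bullseye at the origin (0, 0); shots are (x, y) impact points.
from math import hypot

def group_stats(shots):
    """Return (accuracy_error, precision_spread) for a shot group."""
    n = len(shots)
    cx = sum(x for x, _ in shots) / n   # group center, x
    cy = sum(y for _, y in shots) / n   # group center, y
    accuracy_error = hypot(cx, cy)      # distance from group center to bullseye
    precision_spread = max(hypot(x - cx, y - cy) for x, y in shots)  # group radius
    return accuracy_error, precision_spread

# Five shots that fit under a postage stamp (precise) but sit well
# left of center (inaccurate) until corrected for windage:
print(group_stats([(-3.0, 0.1), (-3.1, -0.1), (-2.9, 0.0), (-3.0, 0.2), (-3.0, -0.2)]))
```

A tight spread with a big offset is precise but not accurate; adjusting the sights for windage fixes accuracy without touching precision.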
Errors of veracity are the history of warfare. The great US military power stumbled in Vietnam, whose most effective military technology was the bicycle. Despite a long historical record, the US also underestimated people defending their homeland.
Accuracy refines veracity. A parent says the stovetop is too hot to touch. Three-year-olds quickly learn when it is too hot and when it is not (detecting heat at a safe distance). No conceivable computer can duplicate this simple feat, i.e., discovering the accuracy of the facts.
Most living creatures perform comparable feats. They do so with brains that are tiny and slow compared to the so-called brains of computers. Living brains evolved primarily for veracity, and secondarily for accuracy. Precision is a product of civilization.
Humans interact with the world to learn veracity first and then accuracy. The computer cannot know the world (veracity) beyond facts we provide. It cannot know if its answers are close (accuracy) without our evaluation. Only then can it employ its one great skill: precision.
Recall that early computers were just high-powered calculators. Today they do much, much more. Yet most of what they do is still calculation—larger, faster, and far more precise. They have no inherent ability for veracity or accuracy. That’s what we do.
April 1, 2013
In the beginning, electricity-sucking vampires were a good thing. Although standby power was promoted as “instant on,” the “on” wasn’t instant, but it was quick. More important than quicker startups of ancient televisions, continuous electricity extended the life of those big cathode ray tubes. How? By shrinking the temperature gap between warm-up and cool-down.
Vampires use a lot (10%) of power. Today’s digital devices don’t really need standby power, but all our modern solid-state hardware gains longevity from smaller temperature variations. It lasts longer on vampire juice. My computer gets its power from a switched wall socket; after flipping the switch, I let everything warm up for 10-15 minutes before turning the computer on.
With the expanding world of digital choices, we must learn to be more judicious. For example, there is no vampire-like advantage in continuous connection to the Internet. Yet we stay connected, whether actively browsing or not, whether our devices are on or off. It’s merely a habit born of convenience.
It’s why a device’s camera is always at the ready. Anyone in the world with permission (overlords) or know-how (hackers) can take a snapshot of you and where you are. They can even use your GPS (or WiFi triangulation) to get a fix on your location—and verify it with a quick wink of your camera.
The last post asked “. . . a machine couldn’t turn itself on, could it?” Actually, it can—and so can other machines (used by other people). While writing this post, Amazon updated my Kindle without asking if it was convenient. (It wasn’t.) If updates fail (and they can), why aren’t we offered cloud-based restoration?
We are captives of our habits and conveniences. Vampires may take a cut, but they extend the life of our devices. Overlords offer more but take more. They want control of our devices to manipulate our digital lives. I only wish this was an April Fool’s joke.
March 25, 2013
We first encountered evil computers in early science fiction, but no one saw any real reason to fear them because we could always pull the plug. TV’s The Outer Limits tried to warn us, but we knew we were in control. Humans were superior to machines. Or so we thought.
What we didn’t think about was what actually constituted control. We assumed, incorrectly, it was our physical ability to flip a switch or pull a plug. After all, a machine couldn’t turn itself on, could it? We were naïve enough to believe control derived from power. What we missed was another form of control: influence.
Control does not necessarily require physical action. Dictators need not physically bully their subordinates. Full obedience is easy to achieve using the smallest of veiled threats. Control doesn’t need to be in your face; it can also be behind your back.
Influence comes in many forms and from many directions, including both above and below. It can come from near or far. It can be strong or weak, direct or indirect. Influence can be blatantly obvious or invisibly clandestine.
The machines’ influence on our behavior is irresistible. They are everywhere. Their influence is so pervasive, we aren’t even aware of it, just as we are unaware of the air we breathe (the clean air, not the smog). The tyranny of the machine is now the very fabric of our existence.
We no longer control our own machines. The erosion of that ability began with vampires stealing electricity. In just 50 years, this expanded to fill our wall sockets, extension cords, and power strips. The vampires continuously powering our machines are ubiquitous. And the overlords control this power.
Once, the powerful publicly flaunted their power. Our modern overlords no longer extol their power, preferring to remain in the background. When we have problems, we blame the machines. We are oblivious to the concealed overlords controlling them—and us.
March 18, 2013
Yes, vampires. Or, if you prefer the less dramatic term, standby power. Many decades ago, when television screens were bulky CRTs (cathode ray tubes), they took a long time to warm up. Then it was decided to keep some power always on, enabling quicker startups. This feature was called “instant on.”
This standby mode quickly acquired vampire status because it was akin to a continual draining, not of blood, but of electricity. TVs with “instant on” were soon followed by VCRs, and then cable boxes—always using power. How much? It’s estimated to be around 10 percent of total residential electrical consumption.
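To get a feel for how standby draw adds up, here is a back-of-the-envelope sketch; the device count, wattage, and electricity price are illustrative assumptions of mine, not figures from the post.

```python
# Rough annual cost of standby ("vampire") power.
# All numbers below are assumed for illustration.
devices = 8            # TVs, VCRs, cable boxes, power bricks...
watts_each = 5         # assumed standby draw per device, in watts
price_per_kwh = 0.12   # assumed electricity price, in dollars

hours_per_year = 24 * 365
kwh_per_year = devices * watts_each * hours_per_year / 1000
print(f"{kwh_per_year:.0f} kWh/year")                   # ~350 kWh/year
print(f"${kwh_per_year * price_per_kwh:.0f} per year")  # ~$42 per year
```

Small per device, but it never turns off, which is exactly how it quietly grows into a noticeable slice of the household bill.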
Now, there are even more vampire devices in your house. Count the DC power adaptors, you know, those bricks powering everything from toothbrushes to external hard drives. While you’re counting, touch them to remind yourself that if it’s warm, it’s on. Oh, did I forget to mention the risk of fire from devices that are always on?
However, fire may not be the biggest risk. Every new device inherits the concept of continuous power. For example, there is no way to turn off a computer. Not just sleep or hibernate mode: even the on/off switch on the box does not completely shut it off. If your computer is connected to the Internet, then it can be turned on (and used) remotely—unless you know the settings to prevent it.
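The post doesn’t name the mechanism, but the usual way a networked machine gets switched on remotely is Wake-on-LAN, so treat this as an illustrative sketch: the network card, running on standby power, listens for a “magic packet” of six 0xFF bytes followed by the machine’s MAC address repeated sixteen times. (The MAC address below is made up.)

```python
# Minimal Wake-on-LAN sender: broadcasts the magic packet over UDP.
import socket

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16          # 6 x 0xFF, then MAC x 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

send_magic_packet("00:11:22:33:44:55")             # hypothetical MAC address
```

The “settings to prevent it” are the Wake-on-LAN options in the machine’s BIOS/firmware and network-adapter configuration; disable them and the packet is ignored.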
Today’s vampire devices are no longer concerned with quick startup. Rather they want power to stay connected to their makers (of hardware and software). But if these devices are accessible to their makers, then they are also accessible to their destroyers, the hackers.
For many decades, I’ve controlled power to my computers by turning them off at a switched wall socket. This precaution is common among those concerned with machines taking over. How often have you heard it said that we can prevent a takeover because we can always pull the plug? We believed this—but is it still true?