Is Machine-Grown Code Fit for Humans?
Recently (in Kevin Kelly’s What Technology Wants), I learned that MS Office contains pieces of software created by so-called “code evolution” (i.e., what was originally called genetic algorithms). So now we’re intentionally creating code we are incapable of understanding. Never mind that such code is generated to “fit” (in evolutionary terms) a defined task. The problem is that the task is human-defined and cannot possibly account for all of the code’s potential interactions with other concurrent code (from other sources).
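To make the point concrete, here is a minimal sketch of the genetic-algorithm idea, not Microsoft’s actual method: candidates are bred, mutated, and scored against a fitness function, and that fitness function is the entire “task definition.” The target, population size, and mutation rate below are all hypothetical toy values.

```python
import random

# Hypothetical "task definition": evolve a bitstring of all 1s.
# The fitness function is the ONLY specification the evolved code
# ever sees -- everything outside it is invisible to the process.
TARGET = [1] * 20

def fitness(candidate):
    # Count matching bits; fitness is only as good as the human-defined target.
    return sum(1 for a, b in zip(candidate, TARGET) if a == b)

def evolve(pop_size=30, generations=200, mutation_rate=0.05, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break  # perfect "fit" to the defined task
        parents = pop[: pop_size // 2]  # selection: keep the fitter half
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(TARGET))      # single-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < mutation_rate)  # random mutation
                     for bit in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), "of", len(TARGET))
```

Note what is absent: nothing in the loop explains *why* the winning candidate works, and nothing checks its behavior outside the fitness function. That gap is the objection.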
That’s the general objection; the particular objection concerns how effectively the generated code meets the defined task. How can you possibly define a task completely in an environment as large as the one in which these tasks function? You may be able to provide a good definition of a small task, but small tasks work together to complete a larger task—and that larger task will likely have conflicts in the larger environment. So if some part of the code goes wrong, no human could debug what was written by a machine. Of course, it would be nice (and logical) if we had machines (i.e., programs) that could find these errors. But I haven’t seen a shred of evidence of that so far.
Finally, the quality of this computer-generated code is totally constrained by the human definition of the task. And if you don’t know enough about the general principles and particular methods of defining problems—and the universe within which those tasks have to function as solutions—then why should you expect error-free results? But was that really Microsoft’s goal? Perhaps it was just to virtualize another job?