Economist.com: Hollywood has made at least half a dozen films based on Mary Shelley’s gothic masterpiece — mindless travesties all of them, even the Kenneth Branagh version released in 1994. That is a pity because the parable of the Genevan protagonist, Victor Frankenstein, deserves wider appreciation, especially among those concerned about technology getting out of control.
In the actual story, there is no crazed assistant, no criminal brain stolen from a grave, no violent rampage, and no angry mob hunting down and killing the monster. Instead, the rejected creation pleads to be accepted and cared for by its creator, and tries hard to fit in with society. Yes, there is violence and revenge — it wouldn’t be a gothic novel without them. In the end, however, the autonomous being departs to commit suicide after its creator dies of disease.
What makes the tale such an enduring classic is the set of moral questions it raises about creation, responsibility and unintended consequences. The lessons are as relevant in today’s world of autonomous technology — whether driverless vehicles or surgical robots — as they were in 1818, when the melodrama first scared the daylights out of Georgian England.
Whether consciously or not, the Royal Academy of Engineering in Britain seems lately to have taken Shelley’s fable to heart. In a report published last week, the academy urges opinion-formers to start thinking seriously about the implications of autonomous technology — machinery that can act independently by replicating human behaviour. The intention is to have such machines do the sort of jobs people find dull, dirty or dangerous. Many such systems either already exist or are closer to reality than is generally realised. And right now, the ethical, let alone the legal, framework for dealing with any untoward consequences of their actions simply does not exist.