I like to think of myself as moderately tech-savvy. I'm the go-to guy among my friends who are intimidated by "real" techs. So I was surprised at my negative reaction to the idea of pervasive data networks. Does anyone else find the phrase "our capacity to be known by others and by systems" just a wee bit disturbing? Or downright chilling? I think Paul Ellerman and Linn Gustavsson, among others, appreciate some of the creepiness.
Intelligent systems with no evil intent are nonetheless programmed to protect themselves and advance their purposes, largely without moral restraint. Of course, there are medical advances. Sure, there is money to be made. But subjectively, I can't seem to shake all the dystopian images, and not just sci-fi about AI run amok. Even non-tech systems (think communism, religion, even pragmatism) exhibit a tendency to be usurped by individuals driven by a conviction that they are right. Because of this passion for what they are convinced is the ultimate good, they are not constrained by morality in their pursuit of the power to force others into compliance with their vision. Or as C.S. Lewis put it, "a tyranny sincerely exercised for the good of its victims may be the most oppressive. … those who torment us for our own good will torment us without end, for they do so with the approval of their own conscience." (God in the Dock)
OK, after that Luddite rant, I'll try a more objective and pragmatic approach. There seems to be an inevitability about the advancement of neXtWeb technologies with invasive potential. Opting out, as Susan O'Grady observes, will hardly be an option. Even the Amish, with their religious aversion to worldliness, have only succeeded in delaying, not halting, their adoption of technological advances. Our discussion needs to be along the lines of, "What are the ways we can help move this forward without destroying ourselves?" Rita's discussion about educators' influence may be one way forward. If we can't stop it and can't opt out, how do the altruistic develop a voice strong enough to counter the self-serving? (Is there actually such a thing as altruism?)
Other Gleanings
Mendeley, totally new to me, was introduced in the discussion forum on reference management programs. Looks like another must-try. Thanks to this course for alerting me to the existence of this type of tool.
Good link from Susan Grigor about Netiquette, although the tone in PLENK has largely been one of mutual respect and reasoned argument.
Great video by Kate Ray about the Semantic Web, referenced by so many PLENKers that I can't remember where I first found the link. I also read the great commentary by Dave Cormier analyzing the implications of the ideas expressed.
A couple of my favourite quotes from the video:
David Weinberger: "A little structure goes a long way if you combine it with, for example, a human being that had a lot of intelligence between his or her ears." (09:49) Do we really want computers to do it all so we don't have to think?
Tim Berners-Lee: (People are) "trying to make it work so much, they’re not going to imagine what things people will be able to do with it once it’s working and it’s well-deployed." (13:30) Sort of like the nuclear physicists who could ignore the horrors of nuclear weapons.
I think I'm picking up what the semantic web is about. Computers should not only store data but understand what they store, so they can intelligently sort, filter, and recommend it. To do that, they need humans to change and standardize the way they enter data about the documents they store on the web. George Siemens says, "…if the web can't be shaped to function as people think, then people must be shaped to function as the web operates. Human thinking and meaning-making are not machine-processable. Cognition is too messy and too contextual."
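To make that concrete for myself, here is a minimal sketch (my own illustration, not anything from the course materials) of what "data the computer understands" can look like: documents described as RDF triples using a standardized vocabulary, so a program can query by meaning rather than guess from raw text. It assumes the third-party Python rdflib library; the example.org URIs, the EX namespace, and the resources described are all made up for illustration.

```python
# A toy sketch of semantic-web-style metadata: explicit, standardized
# statements about resources instead of free-form text.
# Requires: pip install rdflib

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

EX = Namespace("http://example.org/")  # hypothetical namespace for this sketch

g = Graph()

# Describe a (made-up) video resource with standardized Dublin Core properties.
video = URIRef("http://example.org/videos/semantic-web")
g.add((video, RDF.type, EX.Video))
g.add((video, DCTERMS.title, Literal("Web 3.0")))
g.add((video, DCTERMS.creator, Literal("Kate Ray")))
g.add((video, DCTERMS.subject, Literal("semantic web")))

# Describe a (made-up) commentary that explicitly links to the video.
post = URIRef("http://example.org/posts/commentary")
g.add((post, RDF.type, EX.BlogPost))
g.add((post, DCTERMS.creator, Literal("Dave Cormier")))
g.add((post, DCTERMS.references, video))

# Because the structure is explicit, a machine can answer a question like
# "who wrote about resources on the semantic web?" without text matching.
query = """
    PREFIX dcterms: <http://purl.org/dc/terms/>
    SELECT ?post ?author WHERE {
        ?post dcterms:references ?video .
        ?post dcterms:creator ?author .
        ?video dcterms:subject "semantic web" .
    }
"""
for row in g.query(query):
    print(f"{row.author} wrote about {row.post}")
```

Even this toy example shows what Weinberger means by "a little structure goes a long way": nothing in it is intelligent, but because the relationships are explicit, the machine can answer a question that a keyword search would fumble.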
(Would a computer ever cross-link Tim Berners-Lee with Howard Beale?)