In Thesis 16, I like how Greenfield talks about the problems implicit in "everyware". Instead of just listing endless scenarios, he clarifies things by sorting the problems into three categories: technology engaged inadvertently, unknowingly, or unwillingly. I like how this exposes what something like everyware will actually do. It's not that I was unaware of my personal objections to "everyware"; they're just hard to articulate. When discussing this in class, it's hard not to simply give examples and say we wouldn't like them. No why, just the fact that it wasn't something we wanted.
The three categories help me illustrate what problems I would have. Whatever the technological advance may be, I can ask: will this honor the will, knowledge, and intent of every person? It sets boundaries that don't rest on assumption. Now when someone thinks about building a weighing system into a building, they can consider whether it violates one of the three guidelines. Will everyone know it is there? Will everyone know exactly what it does? Will anyone mind participating? Keeping this in mind, and given ever-increasing security, everyware may not be so bad. If a technology cannot violate these three conditions, why not? If I know it's there, know what it can do, and have a choice about using it, why not? The fear of "everyware" comes from unintentional use, from depersonalizing technology. While a urine analysis is a fairly personal thing, if it doesn't force itself upon you, it isn't exactly wrong.