Bots Have Feelings Too

“Charming and cute as they are, the capabilities and intelligence of ‘emotional’ robots are still very limited. They don’t have feelings and are simply programmed to detect emotions and respond accordingly. But things are set to change very rapidly. …To feel emotion, you need to be conscious and self-aware.” (Hewlett, 2019)

Book-Inspired Post: The Cuckoo’s Egg

In The Cuckoo’s Egg, a major problem the narrator kept running into was the ‘not my bailiwick’ problem. To summarize briefly: as the narrator discovered just how far the hacker’s reach sprawled, he reached out (grudgingly) to pretty much every government agency he could snag by the earlobe. Whenever he did get one to bite, they’d stay on the line just long enough to tell him that his problem wasn’t their problem, or at least wasn’t their jurisdiction, and pass him off to another agency. Who’d pass him off to another, who’d pass him off to another. They kept saying that since no money or classified data had been stolen, and no damage had been done to the software, there wasn’t anything they could do.

As someone who grew up with a phone in his pocket, I find that ludicrous. The guy’s breaking into military computers: that’s the very definition of the FBI’s and the NSA’s bailiwick. But on reflection, I realize I think that way because I was literally raised to: hacking equals scary, hacking equals unethical, hacking equals bad. Back then, though, when the internet was still a newborn discovering what it could do with its toes, no one had defined the boundaries. Stealing information off a computer, sure, that’s easy enough to label bad, but what about just looking around in someone’s files? Is your computer like your home, a place protected against trespassers, or is it more like a library, something anyone can visit as long as they don’t start stuffing books into their pockets and walking out with them?

Someone had to answer that question (and a quick Google search of the CFAA, the Computer Fraud and Abuse Act, told me someone did), but that makes me wonder what other computer-ethics questions still need answering, and whose bailiwick it is to answer them. Think about The Social Dilemma, for example. Is it unethical to use algorithms and psychology to create addictive software, to sell people’s attention to advertisers like cattle in a barn? If so, then what do we do about it? Because in this case, as in every similar one, choosing not to decide is itself a decision. If we don’t make it someone’s bailiwick, it will be no one’s, and nothing will change.

