Over the past few weeks, I’ve been gradually looking into setting up and running my own Twitter bot, so I’d like to introduce you to PyClaireH, my very first digital clone. She may be slightly… erm… sweary. In fact, this is probably a pretty close insight into what I’d be like if I drunk-tweeted.
Sorry.
What’s a bot?
A bot is, usually, simple software doing a simple job. Some bots are nice and some are less so. The good ones tend to be playing simulated characters in video games, producing art, crawling the internet for data, putting bids in on eBay for you, providing answers to customer questions, or holding a (hopefully convincing) conversation with you in some way. These last “chatty” ones are informally called chatbots. Meanwhile, the malicious bots are busily exploiting vulnerabilities in computer systems, crashing servers with artificially high traffic, sending out torrents of spam or abuse, scraping data they’re not allowed to collect, and maliciously impersonating people.
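If you’ve never met one in the wild, it’s worth seeing just how little software a “simple job” can take. Here’s a minimal sketch of a Twitter bot that logs in and posts a single tweet, assuming the tweepy library and placeholder credentials; it isn’t PyClaireH’s actual code, just the general shape of the thing.

```python
# A minimal "simple software doing a simple job" bot: authenticate with
# Twitter and post one tweet. Assumes tweepy (pip install tweepy) and an
# app registered at developer.twitter.com to obtain the four keys below.
import tweepy

# Placeholder credentials -- substitute your own from the developer portal.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

api = tweepy.API(auth)
api.update_status("Hello, world! I am a very simple bot.")
```

Wrap that in a loop or a scheduler and give it something to say, and you have a bot; everything else is a question of what it says and why.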
What kind of bot is PyClaireH?
Well, that’s an interesting question. PyClaireH (and, hopefully, a future RyClaireH) is a chatbot: she both produces communication of her own and responds to certain linguistic prompts. However, PyClaireH is also intended to impersonate me. That’s hardly in the realm of the malicious, of course, but it does have malicious applications, and that’s my interest: in the online arms race of fraud and counter-fraud, how well can bots pretend to be us, or, more accurately, specific instances of us? In an experimental setting, for instance, could PyClaireH ever fool someone into believing that she really was me? And can we identify the linguistic tells that distinguish the ghosts in the machine from the humans?
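To give a flavour of the machinery involved, here’s a minimal sketch of one common way bots like this generate “me-flavoured” text: a word-level Markov chain trained on a corpus of old tweets. This is illustrative rather than PyClaireH’s actual method, and the corpus filename and order-1 model are assumptions for the example.

```python
# Sketch: generate text in someone's style by learning, for each word in
# their corpus, which words tend to follow it, then random-walking the chain.
import random
from collections import defaultdict

def build_model(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, length=15):
    """Random-walk the chain to produce a plausible-ish utterance."""
    word = random.choice(list(model))
    out = [word]
    for _ in range(length - 1):
        followers = model.get(word)
        if not followers:  # dead end: this word was never seen mid-sentence
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

# Hypothetical usage: feed it everything you've ever tweeted, let it riff.
with open("my_tweets.txt") as f:
    corpus = f.read()
print(generate(build_model(corpus)))
```

A chain this shallow mostly produces sweary word salad rather than convincing Claire, which is rather the point: the interesting work is in finding, and then hiding, the linguistic tells.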