Bots in the Shell

Questioning Existence Within the Internet


By Cole Buhler

2017

Photo Nextgov

Bots are automated programs that spam pre-loaded information across the internet. They can be controlled manually by a user or left to run completely on their own. We see this content every day when we use social media. Automated accounts that look, post, and feel like real people can fool each and every one of us, as long as we aren't looking too closely.

Companies use bots to inflate social media ad space by mass-liking and retweeting products. Fake accounts prop up Facebook pages by contributing thousands of "subscriptions." Instagram is particularly bad for this: users can pay advertising companies to pad their follower counts and push their businesses into verified-badge territory. It's so simple to do this yourself that there are hundreds of tutorials on YouTube.

The pervasiveness of bots on the internet came to the forefront of the news cycle during the 2016 U.S. election because of the immense problem they present. The efforts of Russian intelligence to disrupt the election, and to divide the people of the world's sole superpower, have infected social media.

And it’s working.

Bots search Facebook and Twitter for keywords and automatically post replies to matching comments. For example, if a user posts content that is pro-Hillary Clinton, whether it is a repost of an article, a personal opinion, or a meme, a bot will reply to the user with inflammatory and hateful remarks. This elicits a response from users, who begin to think that everywhere they look, they are being personally attacked.
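The core mechanism is almost trivially simple. Here is a minimal sketch in Python of how such a reply bot could work; the trigger keywords and canned replies are hypothetical examples, and no real platform API is used:

```python
# Hypothetical sketch of a keyword-triggered reply bot: scan a post's
# text for trigger keywords, then return a pre-loaded reply. Real bots
# wire this same logic to a platform's posting API.

# Map of trigger keywords to canned inflammatory replies (invented examples).
TRIGGERS = {
    "clinton": "Wake up! The media is lying to you!",
    "trump": "Typical sheep, believing everything you read!",
}

def bot_reply(post_text):
    """Return a canned reply if the post contains a trigger keyword,
    or None if no keyword matches (the bot stays silent)."""
    text = post_text.lower()
    for keyword, reply in TRIGGERS.items():
        if keyword in text:
            return reply
    return None
```

A loop over a feed of comments, calling `bot_reply` on each and posting any non-None result, is all it takes to flood a thread. That is the entire "algorithm."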

This method is bipartisan: bots were also used to slander Donald Trump, my opinion of him notwithstanding.

It’s simple to divide a nation when every person in the western world has access to the internet, an easily bruised ego, and an unwillingness to search for the truth.

It is estimated that 45% of highly active Russian Twitter accounts are bots. China has had problems with highly coordinated attacks against politicians, but on the flip side, it has also used bots to push party propaganda and influence public opinion. The algorithms that Facebook and Twitter use have left us open to these types of attacks, and it has taken these companies far too long to realize it.

After it was discovered that the Kremlin paid Facebook hundreds of thousands of dollars for ad space to promote several hundred fake accounts, Facebook started deleting them and filtering out certain content. It was too little, too late, but at least the people controlling much of our modern public lives are now aware of the danger that bots represent.

Despite our better judgment, this has created a sociological problem among internet users. We have started to believe the misinformation that bots traffic in, reposting spam and misleading memes out of confirmation bias – we really do believe what we want to.

Soon, we may realize that we don't even need bots to push division. I've witnessed this myself in the comment sections of CBC articles, for example: it's getting harder to discern bot from human in many of these racist, hateful tirades. Humans are easily manipulated. Hopefully the danger of division hasn't rooted itself too deeply in our culture for us to come back from these attacks.

When writing in heated comment sections, it's best to take a step back and consider whether you're talking to a living, breathing human or a malicious propaganda tool. Bots don't have souls – or at least not yet.
