If you want the Internet to “be nice,” it is up to you: here’s a big reason why

In the relentless drive to cut the cost of human labor to zero, the ultimate dream of anonymous investors, online companies are working diligently to build human-like bots, using neural networks to make them sound ever more convincing. At the same time, we in the great unwashed mass of humanity insist that our opinions be heard and that we are free to blurt out hurtful and insensitive tweets and posts. Combine the two trends and what do we get? Abusive bots.

The article linked below underlines a recent realization: if we want a better online social climate, we should look into the profiles of abusive posters, and we should be very careful about how we react to anyone’s post, whether it comes from a real person or a bot. Most important, though, is being careful about what we say and how we say it.

In real life we continuously filter our speech. We have the luxury of immediate feedback on even the first word we say, even feedback on how our face looks as we get ready to say it. Online we get no such feedback until we are well into a conversation, and we give the other person no clues about our own growing emotions.

So we have to work extra hard to overcome those blind spots when we engage. We have to moderate our tendency to react immediately. We have to learn to ask questions when we read something that may or may not be intentionally offensive, before we take any attitude toward the poster and what they said, because we can’t see their face, we can’t read their body language, and often enough we don’t even have the full context of the earlier messages in a thread.

If we react online the way we do in a face-to-face conversation, it is almost certain that the conversation will go sideways. With a real person there is at least a chance that someday, some way, we will have an actual face-to-face conversation that clears up the misunderstanding. We can’t say that about a bot. A bot exists only as a dense thicket of probability branches in a data structure. At best we can re-train it, and if it’s a bot that talks to a lot of people, it would take a lot of people with the same goal to do that re-training.

But that same bot could well be standing in for a real person in some role, like giving healthcare advice or explaining how something works on a help line. Bots can only pick up on how people act through their writing, and if we keep acting like unfiltered assholes, what do you think they will pick up?
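
To make the “probability branches” idea concrete, here is a minimal sketch in Python. This is nothing like a production chatbot, and the training messages are invented for this example; it is just a toy Markov chain that learns only from the text it is fed. Feed it polite messages and it parrots polite messages; feed it abuse and it parrots abuse.

    import random
    from collections import defaultdict

    # Toy illustration only; real chatbots are far more complex, and the
    # training messages below are invented for this sketch.
    def train(messages):
        chain = defaultdict(list)      # word -> list of words seen after it
        for msg in messages:
            words = msg.split()
            for a, b in zip(words, words[1:]):
                chain[a].append(b)     # these lists are the "probability branches"
        return chain

    def generate(chain, start, max_words=10):
        word, out = start, [start]
        for _ in range(max_words - 1):
            followers = chain.get(word)
            if not followers:
                break
            word = random.choice(followers)
            out.append(word)
        return " ".join(out)

    # The bot can only echo the tone of whatever it was trained on.
    polite = ["thanks so much for the help", "glad I could help you out"]
    bot = train(polite)
    print(generate(bot, "thanks"))

The point of the toy is the last comment: the generator has no opinions of its own, only a record of what people said to it, so the tone of its output is entirely a function of the tone of its input.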

This is just one very important reason why we need to survive this growing online dystopia by taking personal responsibility for how we act and react. By understanding the shortcomings of the medium we are using, by putting ourselves in our readers’ shoes and imagining how they will react before we hit the Send button. By asking more questions before reacting, so we can be sure our reaction even makes sense. By taking our time, just as we do in real life.

Read the article – https://www.nytimes.com/interactive/2018/02/21/technology/conversational-bots.html