Typically, when we think about films and writing about a robot takeover, we picture sentient bipedal beings repulsed by the human condition. We rarely consider that a being with no embodiment could manipulate the human experience. However, arms, legs, and thumbs are not necessary to gain a hold on a person’s mind. The mind drives our every action, and it can be influenced without difficulty and in a matter of seconds, especially when the antagonist is in our pocket.
The United States is currently undergoing an era of extreme polarization. From storming the Capitol to harassing poll workers, we have seen how passionate citizens are about their political positions. To address these cases, there have been new efforts at censorship to keep people from posting crude ideas on the internet for others to follow. Though this may seem like an immediate fix, it begins to chip away at our First Amendment right of free expression. I think we can collectively agree that those whose careers depend on an audience do hold a higher degree of social responsibility. That responsibility can be defined by that person and their audience or producers. It is not reasonable or equitable to change community guidelines for all users on the terms of a single cultural philosophy.
Additionally, by asking consumers to change their behavior, we leave the actions of large developers unmonitored. Large software companies such as Facebook and McAfee have found themselves in trouble over data sharing in recent years. However, there has been no accountability for companies’ use of algorithms designed to keep consumers attached to their phones.
Companies such as Facebook, TikTok, Twitter, and many others use strategies that deliberately create an addiction to their feeds. While a user is logged in to these platforms, their interactions are monitored to see what content keeps them scrolling. Interestingly, these companies have found that shock and dislike contribute more heavily to interaction than any other emotional response. People are more likely to share, and to scroll through the comments on, something that has caused them a negative emotional reaction. For this reason, a person is more likely to be shown a post in which someone they agree with mocks something they disagree with than to simply see the content they agreed with in the first place.
The AI that runs these social media feeds is driven by algorithms that heighten user interaction and nothing else. For example, one post may hold an average of 30 seconds of interaction while another holds 10. The post with the 30-second interaction time is picked up by the algorithm and pushed into the feeds of users who are likely to respond to that stimulus. Since interaction is tied to negative emotions, this means we are constantly being re-fed content with negative connotations. The more a person visits the platform, the more the AI learns about what makes them stay.
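To make that mechanism concrete, here is a minimal sketch of engagement-based ranking. The field names, scoring weights, and example posts are hypothetical, and real systems use far more signals, but the objective is the same: surface whatever keeps people interacting, with no term for whether the content is healthy.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    avg_view_seconds: float  # how long users linger on this post
    share_rate: float        # fraction of viewers who share or comment

def engagement_score(post: Post) -> float:
    # Hypothetical scoring: weight lingering time and shares.
    # Nothing in this objective asks what the content does to the user,
    # only whether it keeps them on the feed.
    return post.avg_view_seconds + 100 * post.share_rate

def build_feed(candidates: list[Post], limit: int = 20) -> list[Post]:
    # Rank candidate posts purely by predicted engagement.
    return sorted(candidates, key=engagement_score, reverse=True)[:limit]

# The 30-second post outranks the 10-second post, as described above.
feed = build_feed([
    Post("calm_update", avg_view_seconds=10, share_rate=0.01),
    Post("outrage_bait", avg_view_seconds=30, share_rate=0.08),
])
print([p.post_id for p in feed])  # ['outrage_bait', 'calm_update']
```

Because the ranking is personalized to whatever each user has lingered on before, the loop reinforces itself: the longer someone stays, the better the system gets at keeping them there.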
During times of human peril, such as the pandemic, it is irresponsible to add an additional layer of negativity to the mind for profit. The owners of these development companies must be held accountable and should be stripped of the privilege of conducting such monitoring. It is not the responsibility of these companies to keep people quiet. It is their responsibility to keep their algorithms from sending users on downward spirals.
So, no, an AI takeover is not robots with cannons for arms; it is a constant feed holding up a mirror to the worst versions of ourselves, leading us to perpetuate the same patterns in the “real world.”
The documentary “The Social Dilemma” features developers who worked on this AI as they voice their concerns for our future and their regret over their contributions. It is a great introduction to the topics I have covered here.