It's no secret that America has fallen into cultural decline in recent years, but the major question is when it started. You can argue many points, but I think the War on Terror is certainly a strong catalyst of the problem. Its influence has reached nearly every corner of daily life. I think the biggest impacts can be broken into a few parts, and I will be detailing my thoughts on some of them below.
The most immediately obvious impact of the War on Terror is surveillance. 9/11 gave the federal government an easy way to justify further surveillance of the citizenry, and being at war gave it cover to dump tons of money into any technology that could aid in "defeating terrorism," including our capacity to surveil innocent Americans. People love complaining about the deficit, but we could have saved a lot of money by dismantling or severely scaling back our mass surveillance systems. Look at how expensive things like Flock Safety systems are; these systems are certainly not cheap for us, whether the price is artificially inflated by some AI startup or is the genuine cost of doing it all in-house, so to speak. The biggest tech companies all have contracts with the US government. Elon Musk's whole gaggle of companies, NVIDIA, Oracle, Microsoft, Palantir, etc. all run, to some extent, off of your money (or money borrowed on the belief that you, the taxpayer, will be able to pay it back someday). This is all obvious and doesn't really need saying, but it's still worth noting. Making these surveillance systems privately operated will surely have no consequences. It definitely will not lead to companies getting lax on security in pursuit of profit and letting random people look at a nearly infinite sprawl of cameras across the entire country. Oh, it already has? How unfortunate. We need to keep the cameras up so we can spot all the evil illegal Salvadorans that want to rape and kill your whole family and send them to South Sudan or whatever. It would be un-American to interfere with private enterprise anyway. Financializing the surveillance state is actually pretty clever in some ways: it adds an economic penalty to any attempt to curtail the system. If these companies were to go out of business, there would certainly be a negative impact on the technology industry.
Note that the top seven companies, which comprise nearly half of all stock value in the American stock market, are all tech companies, and all have some degree of investment in the outcome of AI. Now, instead of being a matter of government policy, the issue is one of business; it's technically not the government doing the surveillance, they just use the resources of a private company to assist their goals. This also makes these companies great targets for government bailouts: they provide valuable services to the government, and, supposedly, everything will collapse if they fail. The AI bubble may pop, but AI is here to stay in one form or another (under the current order of things). We're already seeing AI used to write police reports and AI-powered weaponry deployed in proxy conflicts. These things may seem far from us in the West, but they're already being deployed on a small scale here: insurance companies using AI to deny claims and raise premiums, AI-driven price fluctuations in online shopping. All of these small things will build up over time, assuming the whole country doesn't burn down or become uninhabitable first, of course. It's rotten to the core, but I wouldn't expect much different from the kind of people that smiled while sexually assaulting prisoners in Abu Ghraib.
The surveillance state has been financialized. The internet has become an important part of our lives, whether you like it or not, and monitoring someone's internet activity isn't far from monitoring their home. Experts have said there are ways to predict mood episodes in bipolar patients using a Fitbit, for example. I'm sure this will have benevolent use cases, but you would have to be rather stupid not to see the obvious malicious implications. I remember reports of a website using similar data to customize the suggestions shown to manic people, upcharging them and recommending items more likely to be bought impulsively. This is going to be widespread sooner or later unless we make it illegal and viciously enforce the ban, which I highly doubt will happen soon given how enmeshed the current political system is with the tech industry. The "arms race" going on with AI at the moment will only make that bond deeper. This kind of surveillance is, for now, largely for private benefit, but it isn't hard to imagine a world where the state uses it adversarially in a more blatant capacity. Let us not forget that the internet was a government project at first. If the AI determines that you have criminal interests (sexual deviancy, political dissidence, etc.), it could in theory alert the police and put you in prison. AI has already been used to falsely ticket people. Once the time is right, this privatized AI system will be put to full use in governmental surveillance. As the internet becomes more and more important, this becomes more of a problem. Perhaps the solution is to abandon the internet almost entirely, using it only for business. These same AI systems are already driving people into psychosis, and those vulnerable people are then farmed for advertiser money in the form of sponsorships slipped into the chatbot's replies. The business of tech really embodies a lot of the problems of the War on Terror.
The tech companies abuse the loneliness epidemic that was kicked into high gear by the Great Recession (which was made both more likely and much worse by the "patriotism" that enveloped the country; regulation and encroachment upon free enterprise were anti-American, per Reagan and Clinton) to harvest more and more of your data and sell you increasingly worthless piece-of-shit products while making you even more socially atomized. Then they push gambling apps to capitalize on the cultural nihilism that comes with a continually worsening economy and a population that feels more disconnected from one another than ever before. These problems are exacerbated by tech companies, and they profit massively from their continuation. If people weren't so lonely and miserable, they would use the internet less, which would give these systems less data to process, less of our souls to market back to us and sell to others. This merger of the public and private sectors is rather disgusting, at least in my view. It encourages cancerous social policy that benefits nearly no one economically and damages everyone socially. Perhaps I am simply mentally deficient and wrong. I guess it's not impossible.
Surveillance has become a part of our culture, much more than in past decades from what I can tell. There have always been forces of this sort in our society; DARE told kids to report their parents for smoking pot, people were told to report potential communists, and so on. The time we live in now is different, in my view, because we are socially encouraged to "report" our peers for doing things that aren't even ideologically opposed to the powers that be (in any explicit way, anyway) or really seen as morally wrong at all. I saw a post the other day of a guy sitting near a window, reading a book and drinking coffee. The poster made it clear that he found this disreputable in some way, for some stupid reason. What a miserable slave of a person. Can you imagine seeing a guy just reading a book in his OWN APARTMENT and thinking he's trying to show off? The poster was, ironically enough, being more performative than the person being photographed; the person in the photo was just doing what he wanted to do, while the poster made a big stink about it online in hopes of getting admiration from at least a few of the millions of people on Instagram. I feel like people weren't doing shit like this even at the peak of the Cold War. The vague threat posed by "terrorists" of all sorts provides a convenient justification for societal paranoia. If someone cares about environmentalism or animal rights, they might be one of those pesky eco-terrorists who live in the Amazon rainforest because they hate and envy rich people; if they are opposed to the rape and torture of prisoners, they must be pro-crime anarchist terrorists who think everything should be legal; if they think police murdering random civilians is bad, they are clearly anti-white terrorists; the list goes on. The adoption of terrorism as the major threat removes any need for these various people to have any relation to one another.
They can be Nazis or black supremacists or socialists or anything else; they just have to have a vague sense of being against the official stance of the US government or its friends abroad. When the threat was communism there was some level of internal consistency, but a terrorist can be literally anyone who believes anything. The War on Terror reduced Cold War-era cultural paranoia down to its simplest form: a fear of the unknown, a distrust of the outsider.
This era of intensified paranoia has been paired with advances in technology that enable ever more algorithmic pattern recognition. Now the algorithms profile their users and push them into poorly defined but noticeable niches. This process serves a few functions; in my view, it can be broken down into three major elements:
Advances in algorithmic targeting have allowed platforms to build something of a psychological profile of each user and to use that profile to push only the most palatable, likeable content. Past forms of mass media (TV, magazines, film, etc.) were ultimately limited in their ability to reach the right consumers: the ideal consumer had to be in the right region, have the right interests, have money, and so on. Nowadays, in the Western world at least, most people have some degree of internet access, and the only real barriers are taste and marketing. You can create the nichest shit in the whole world and it can reach its intended audience somewhat reliably. This hypertargeting funnels users into a bubble of sorts where they primarily see things they will react to, either positively or negatively, and the content consumed is forced into an increasingly narrow range. This expectation of homogeneous content naturally results in confusion, displeasure, etc. when it is not met. I can't help but imagine that this carries over to real life to some extent. As people spend more and more time online, the things they see online matter more and shape their worldview more. This seems kind of old-man-yells-at-cloud, but I really think that, at some level, seeing a more homogeneous and bland online world makes people less accepting of a colorful, diverse offline world. When you live out and about and see all the strange people in their strange ways, you learn to live with it, for the most part. When you spend a fourth of your day looking at a carefully curated selection of robotic slop that depicts the world a certain way, you'd expect some level of discontent when the offline world turns out to be dissimilar.
It strengthens the urge toward pattern-seeking and xenophobia that the War on Terror brought into the spotlight; it trains us to notice when things don't fit within a narrow band of possibilities and rewards us for pointing it out. A great example is when men comment things like "Erm... looks like I'm in enemy territory!" on a post about women and end up with the most-liked comment. Being a whiny baby about the algorithm getting your preferences wrong is socially rewarded. Or consider the way dating apps work: people are expected to treat dating and finding a romantic partner, one of the most important aspects of most non-aroace people's lives, the same way they treat YouTube Shorts, swiping away if the thing presented isn't immediately attention-grabbing. This set of behaviors gets more and more ingrained as we use the internet more, especially for the kids around now who have been using it since before they could even read. Watchtime maximization comes with a long list of consequences that I will elaborate on at a later time, but for now the important thing is that it exacerbates these negative effects.