I write about technology at theluddite.org
Totally agreed. I didn’t mean to say that it’s a failure if it doesn’t properly encapsulate all complexity, but that the inability to do so has implications for design. In this specific case (as in many cases), the error they’re making is that they don’t realize that the root of the problem they’re trying to solve lies in that tension.
The platform and environment are something you can shape even without an established or physical community.
Again, couldn’t agree more! The platform is actually extremely powerful and can easily change behavior in undesirable ways for users, which is actually the core thesis of that longer write up that I linked. That’s a big part of where ghosting comes from in the first place. My concern is that thinking you can just bolt a new thing onto the existing model is to repeat the original error.
This app fundamentally misunderstands the problem. Your friend sets you up on a date. Are you going to treat that person horribly? Of course not. Why? First and foremost, because you’re not a dick. Your date is a human being who, like you, is worthy and deserving of basic respect and decency. Second, because your mutual friendship holds you accountable. Relationships in communities have overlapping structures that mutually impact one another. Accountability is an emergent property of that structure, not something that can be implemented by an app. When you meet people via an app, you strip away both the humanity and the community, and with it goes both individual and community accountability.
I’ve written about this tension before: As we use computers more and more to mediate human relationships, we’ll increasingly find that being human and doing human things is actually too complicated to be legible to computers, which need everything spelled out in mathematically precise detail. Human relationships, like dating, are particularly complicated, so to make them legible to computers, you necessarily lose some of the humanity.
Companies that try to whack-a-mole patch the problems with that will find that their patches are going to suffer from the same problem: Their accountability structure is a flat shallow version of genuine human accountability, and will itself result in pathological behavior. The problem is recursive.
That would be a really fun project! It almost reads like the setup for a homework problem for a class on chaos and nonlinear dynamics. I bet that as the model increasingly takes into account other people’s (supposed?) preferences, you get qualitative breaks in behavior.
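To gesture at what I mean by qualitative breaks: here's a toy sketch using the classic logistic map as a stand-in for a preference-feedback loop (the map and the parameter values are purely my own illustration, not anything from the actual app). As the feedback strength grows, the long-run behavior changes character abruptly, even though the rule itself changes smoothly:

```python
# Toy illustration of "qualitative breaks": the logistic map
# x_{n+1} = r * x * (1 - x). As the feedback strength r grows,
# the long-run behavior bifurcates (fixed point -> 2-cycle -> chaos)
# even though the update rule varies smoothly with r.

def orbit_tail(r, x0=0.5, burn_in=500, keep=8):
    """Iterate the map past transients and return the last few states."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 4))
    return tail

# r = 2.5: the system settles on a single stable "preference" (fixed point)
print(orbit_tail(2.5))
# r = 3.2: the very same rule now flips between two states (period doubling)
print(orbit_tail(3.2))
```

Something structurally similar could plausibly happen in a dating-preference model: below some coupling strength everyone's inferred preferences settle down, and above it they start oscillating or wandering chaotically.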
Stuff like this is why I come back to postmodernists like Baudrillard and Debord time and time again. These kinds of second- (or Nth-) order “news” are an artifact of the media’s constant and ever-accelerating commodification of reality. They just pile on more and more and more until we struggle to find reality through the sheer weight of its representations.
Really liked this articulation that someone shared with me recently:
here’s something you need to know about polls and the media: we pay for polls so we can write stories about polls. We’re paying for a drumbeat to dance to. This isn’t to say polls are unscientific, or false, or misleading: they’re generally accurate, even if the content written around marginal noise tends to misrepresent them. It’s to remind you that when you’re reading about polls, you’re watching us hula hoop the ouroboros. Keep an eye out for poll guys boasting about their influence as much as their accuracy. That’s when you’ll know the rot has reached the root, not that there’s anything you can do about it.
This article is a mess. Brief summary of the argument:
I think that this argument is sloppily made, but I’m going to read it generously for the purposes of this comment and focus on my single biggest disagreement: It misunderstands why LLMs are such a big deal under capitalism, because it misunderstands the interplay between technology and power. There is no such thing as a technological revolution. Revolutions happen within human institutions, and technologies change what is possible in the ongoing and continuous renegotiation of power within them. LLMs appear useful because we live under capitalism, and we think about technology within a capitalist framework. Their primary use case is to allow capitalists to exert more power over labor.
The author compares LLMs to machines in a factory, but machines produce things, and LLMs produce language. Most jobs involve producing language as a necessary byproduct of human collaboration. As a result, LLMs allow capitalists to discipline labor because they can “do” some enormous percentage of most jobs, if you think about human collaboration in the same way that you think about factories. The problem is that human language is not a modular widget that you can make with a machine. You can’t automate away the communication within human collaboration.
So, I think that the author makes a dangerous category error when they compare LLMs to factory machines. That is how capitalists want us to think of LLMs, because it allows them to wield LLMs as a threat to push wages down. That is their primary use case. Once you remove the capitalist/labor power dynamic, LLMs lose much of their appeal and become just another example of for-profit companies mining public goods for private profit. They’re not a particularly special case, so I don’t think they require special treatment in the way that the author lays out, but I agree that companies shouldn’t be allowed to do that.
I have a lot of other problems with this article, which can be found in my previous writing, if that interests you:
Investment giant Goldman Sachs published a research paper
Goldman Sachs researchers also say that
It’s not a research paper; it’s a report. They’re not researchers; they’re analysts at a bank. This may seem like a nitpick, but journalists need to (re-)learn to carefully distinguish between the thing that scientists do and corporate R&D, even though we sometimes use the word “research” for both. The AI hype in particular has been absolutely terrible for this. Companies have learned that putting out AI “research” that’s just them poking at their own product but dressed up in a science-lookin’ paper leads to an avalanche of free press from lazy credulous morons gorging themselves on the hype. I’ve written about this problem a lot. For example, in this post, which is about how Google wrote a so-called paper about how their LLM does compared to doctors, only for the press to uncritically repeat (and embellish) the results all over the internet. Had anyone in the press actually fucking bothered to read the paper critically, they would’ve noticed that it’s actually junk science.
Props to her, and this is intended as a friendly comment between people on the same side, but I think this can be dangerous.
Chomsky famously noted that brevity is inherently conservative, and that’s actually a pretty profound observation. Any time that you are brief to an audience that doesn’t have much context, your message is going to pick up conservative baggage. Just imagine debating someone on how American imperialism is bad in front of a crowd that has never questioned the USA as the bastion of freedom and democracy in the world. Your opponent just has to say “freedom” and “support the troops” and “9/11” as pre-canned concepts with a lot of power and imagery, whereas you’re going to have to spend a ton of words unpacking all that. Any time that you say freedom, you’re going to have to explain what you mean, or the audience will interpret it as the canned American concept of Freedom™. This is something that the 19th- and early 20th-century anarchists and communists understood intuitively and talked about quite a lot, even if they didn’t articulate it quite as succinctly (lol) as Chomsky did. It’s everywhere in their revolutionary theories.
So, while I do think that it’s important to create effective and engaging short-form agitation and propaganda materials, they should be part of a larger messaging apparatus that leads you to some sort of more profound relationship with politics. Getting the entirety of your politics from short form video will necessarily lead to a shallow and mostly aesthetic understanding of politics, easily exploitable by reactionaries. It’s how you end up with the Red Scare podcast, or MAGA communism, or any of these other aesthetically pseudo-leftist but actually deeply conservative discombobulated ideologies.
edit: also meant to say that it was not a great interview lol.
Haha I was actually paraphrasing myself from last year, but I’ve seen that because lots of readers sent me that article when it came out a few months later, for obvious reasons!
I completely and totally agree with the article that the attention economy in its current manifestation is in crisis, but I’m much less sanguine about the outcomes. The problem with the theory presented here, to me, is that it’s missing a theory of power. The attention economy isn’t an accident, but the result of the inherently political nature of society. Humans, being social animals, gain power by convincing other people of things. From David Graeber (who I’m always quoting lol):
Politics, after all, is the art of persuasion; the political is that dimension of social life in which things really do become true if enough people believe them. The problem is that in order to play the game effectively, one can never acknowledge this: it may be true that, if I could convince everyone in the world that I was the King of France, I would in fact become the King of France; but it would never work if I were to admit that this was the only basis of my claim.
In other words, just because algorithmic social media becomes uninteresting doesn’t mean the death of the attention economy as such, because the attention economy, in some form, is innate to humanity. Today it’s algorithmic feeds, but 500 years ago it was royal ownership of printing presses.
I think we already see the beginnings of the next round. As an example, the YouTuber Veritasium has been doing educational videos about science for over a decade, and he’s by and large good and reliable. Recently, he did a video about self-driving cars, sponsored by Waymo, which was full of (what I’ll charitably call) problematic claims that were clearly written by Waymo, as fellow YouTuber Tom Nicholas pointed out. Veritasium is a human that makes good videos. People follow him directly, bypassing algorithmic shenanigans, but Waymo was able to leverage their resources to get into that trusted, no-algorithm space. We live in a society that commodifies everything, and as human-made content becomes rarer, more people like Veritasium will be presented with more and increasingly lucrative opportunities to sell bits and pieces of their authenticity for manufactured content (be it by AI or a marketing team), while new people that could be like Veritasium will be drowned out by the heaps of bullshit clogging up the web.
This has an analogy in our physical world. As more and more of our physical world looks the same, as a result of the homogenizing forces of capital (office parks, suburbia, generic blocky buildings, etc.), the fewer and fewer remaining parts that are special, like say Venice, become too valuable for their own survival. They become “touristy,” which is itself a sort of ironically homogenized, commodified authenticity.
edit: oops I got Tom’s name wrong lol fixed
An alternative approach that may interest you: https://theluddite.org/#!post/reddit-extension
I will always upvote Astra Taylor, and everyone with debt should join the Debt Collective!
I have worked at two different startups where the boss explicitly didn’t want to hire anyone with kids and had to be informed that there are laws about that, so yes, definitely anti-parent. One of them also kept saying that he only wanted employees like our autistic coworker when we asked him why he had spent weeks rejecting every interviewee that we had liked. Don’t even get me started on people that the CEO wouldn’t have a beer with, and how often they just so happen to be women or foreigners! Just gross shit all around.
It’s very clear when you work closely with founders that they see their businesses as a moral good in the world, and as a result, they have a lot of entitlement about their relationship with labor. They view laws about it as inconveniences on their moral imperative to grow the startup.
This has been ramping up for years. The first time that I was asked to do “homework” for an interview was probably in 2014 or so. Since then, it’s gone from “make a quick prototype” to assignments that clearly take several full work days. The last time I job hunted, I’d politely accept the assignment and ask them if $120/hr is an acceptable rate, and if so, I can send over the contract and we can get started ASAP! If not, I refer them to my thousands upon thousands of lines of open source code.
My experience with these interactions is not that they’re looking for the most qualified applicants, but that they’re filtering for compliant workers who will unquestioningly accept the conditions offered in exchange for the generally lucrative salaries. Those are the employees they need in order to keep their internal corporate identity of being the good guys as tech goes from being universally beloved to generally reviled.
Whenever one of these stories comes up, there’s always a lot of discussion about whether these suits are reasonable or fair or whether it’s really legally the companies’ fault and so on. If that’s your inclination, I propose that you consider it from the other side: Big companies use every tool in their arsenal to get what they want, regardless of whether it’s right or fair or good. If we want to take them on, we have to do the same. We call it a justice system, but in reality it’s just a fight over who gets to wield the state’s monopoly on violence to coerce other people into doing what they want, and any notions of justice or fairness are window dressing. That’s how power actually works. It doesn’t care about good faith vs. bad faith arguments, and we can’t limit ourselves to only using our institutions within their veneer of rule of law when taking on powerful, exclusively self-interested, and completely antisocial institutions with no such scruples.
Yeah, as always, the devil is in the details. For now I think that we need a simple and clear articulation of the main idea. In the exceedingly unlikely event that it ever gets traction, I look forward to hammering out the many nuances.
It’s not a solution, but as a mitigation, I’m trying to push the idea of an internet right of way into the public consciousness. Here’s the thesis statement from my write-up:
I propose that if a company wants to grow by allowing open access to its services to the public, then that access should create a legal right of way. Any features that were open to users cannot then be closed off so long as the company remains operational. We need an Internet Rights of Way Act, which enforces digital footpaths. Companies shouldn’t be allowed to create little paths into their sites, only to delete them, forcing guests to pay if they wish to maintain access to the networks that they built, the posts that they wrote, or whatever else it is that they were doing there.
As I explain in the link, rights of way already exist for the physical world, so the idea is easily explained even to the less technically inclined, and it gives us a useful legal framework for how digital rights of way should work.
I seriously don’t understand how everyone is so confused about this. It’s actually not cool at all to continuously drop bombs on densely inhabited areas while also denying them water, food, medicine, fuel and so on. Any further analysis or complexity has to build on top of that fucking obvious reality, not ignore it.
Not a great headline (Current Affairs seems to do this a lot).
The actual argument presented is that medical debt forgiveness drives are actually harmful. It was once a kind of gimmick that both gave people immediate relief and shed light on the industry (they reference John Oliver, for example), but now that it has become very popular, it’s actively funding medical debt collectors.
Thanks! It means a lot when people say nice things. I can see the server logs, so I know there are thousands of readers who presumably enjoy it, but I usually only hear from people who don’t, so thanks friend! I really appreciate the note.
Glad to hear it!