SocietalCode: Curated #1
News from the intersection of technology and societal change.
SocietalCode: Curated is a weekly feed of news, analyses, and resources concerning societal change and technology.
I read broadly to keep myself up to date on the space, and pass along the best of what I find here. Intentionally presented with little analysis, it’s meant as a curated stream with a pinch of context, rather than my full thoughts.
(Texas also sued Facebook for the same violations in February.)
You can see Paxton’s 2022 legal campaign against Big Tech as sincere or cynical (the Texas law that was violated has existed since 2009, but for some reason it is only now being enforced). Regardless, these biometrics lawsuits are an important part of society’s ongoing conversation about where the line should be drawn on recognition technologies.
The Republican National Committee has filed a lawsuit against tech giant Google, alleging the company has been suppressing its email solicitations ahead of November’s midterm elections — an allegation Google denies.
The study, based on emails sent during the U.S. presidential campaign in 2020, estimated Gmail placed roughly 10% of email from “left-wing” candidates into spam folders, while marking 77% from “right-wing” candidates as spam.
Even if the study’s results hold up at scale, I doubt this case will produce anything surprising. Mundane, innocent, and legal explanations for the perceived behavior are easy to find here. I’ll be following it nonetheless, though, as it will be a great case study of how innocent technical processes may be seen to have social impact at scale.
Now, I do not mean to compare Google and Meta’s use of facial recognition tech mentioned above to Iran’s use here. The situations are so different that any equivalency or comparison would cloud reality more than clear it up.
However, as the conversation on biometric tech continues worldwide, it’s important to note the worst-case uses of it we can see playing out already.
On November 15, the human population is projected to cross 8 billion! Here’s a rather humbling way to see it happen.
Does Hazing Actually Increase Group Solidarity? Re-examining a Classic Theory with a Modern Fraternity
Using an American social fraternity, we report a longitudinal test of the relationship between hazing severity and group solidarity. We tracked six sets of fraternity inductees as they underwent the fraternity’s months-long induction process. Our results provide little support for common models of solidarity and suggest that hazing may not be the social glue it has long been assumed to be.
Now, one study does not a scientific consensus make, and frankly I do not know what the broader consensus is these days regardless. Maybe this study is just a contrarian take. Still, it’s always interesting to see evidence against what we consider common wisdom. It reminds us that our individual view of reality is just that, a view, not truth. Perhaps our beliefs are accurate, perhaps they aren’t; we ought to hold them lightly and use them with humility.
Note: The Russo-Ukrainian war has also led to discussion on the negative effects of hazing on group dynamics. The Russian military reportedly has a history of brutal, endemic hazing, termed dedovshchina, dating back to its Soviet origins. I doubt such a study is feasible, but it would be fascinating to see true in-depth research linking the practice to the overall apparent dysfunction of the Russian military today.
In the French Navy’s Exercise Polaris 21, “a Blue Force frigate was destroyed by a volley of 14 missiles fired at a location found on the Snapchat account of a sailor on the frigate.”
Emphasis on “Exercise”, here; no actual missiles were fired nor frigate destroyed.
It’s hard not to laugh at anecdotes like these; the silliness of it all makes them difficult to take seriously. However, information gathered from social media has played a visible role in the Russo-Ukrainian war, and there are serious consequences to this.
First, this article should not be taken as truth. It only cites an anonymous Reddit comment, yet irresponsibly implies that this is a real, widespread phenomenon. It’s possible in theory, but in practice I would want to see more proof to believe that AI-generated content was actually earning good grades, as the anonymous commenter claims.
However, I still include it here as it’s part of a growing conversation around the value of AI-generated content, and the increasing difficulty of separating it from human-generated content (see GitHub’s Copilot and OpenAI’s Dall-e 2 for other examples of remarkably human content generators).
Overall, though, I don’t think there is much to worry about. This may cause problems in the short term, but the solutions we develop to deal with them will also solve other existing problems of today. The problem must get worse before it receives enough attention to get better. I hope to elaborate more on this in a future essay!
A good reminder to distrust by default when using online social systems.
If you have comments, suggestions, criticism, or you just simply want to say hello, I would love to hear from you! You can always reach me by replying to this email.
See you next week.