Barely Legally

confessions of a moot court bailiff

Dialectical Gems

Today, we have two different post-mortems of the mansplaining that occurs after a woman expresses an opinion. The first is a statistical analysis of the mansplaining prompted by Holly Wood’s rebuttal of some rich guy’s defense of income inequality. You should read Wood’s essay, as well as the analysis, which includes dialectical gems like this:

What is the best way to look like the smartest person in the room without actually saying anything worth noting? Say that both sides are wrong and that having a strong opinion is for overly passionate losers. This is often mixed with tone-policing and repeated efforts to make sure everyone understands they’re not on anyone’s side. You can’t be on a side in a public debate. That’d mean having an opinion that is potentially not just regurgitating the status quo!

“Both sides” is usually just intellectual cowardice disguised as nuance.

The second post-mortem, by Rebecca Solnit, is no less scathing. Solnit wrote an article called Men Explain Lolita to Me; men were apparently honor-bound to educate Solnit after she picked on Esquire for publishing a list of 80 Books Every Man Should Read. A full 79 of those books were written by men, and Solnit explained why that matters:

It seemed to encourage this narrowness of experience and I was arguing not that everyone should read books by ladies—though shifting the balance matters—but that maybe the whole point of reading is to be able to explore and also transcend your gender (and race and class and nationality and moment in history and age and ability) and experience being others. Saying this upset some men. Many among that curious gender are easy to upset, and when they are upset they don’t know it (see: privelobliviousness). They just think you’re wrong and sometimes also evil.

It’s tempting to take the cheap shot, the sarcastic, nihilistic poke, and say “well, of course. It’s Esquire. This is par for the course.” You could even link to something actually educational about Esquire’s sordid history to prove your point. But that’s still the lazy way out, and Solnit isn’t lazy. This is much better:

Scott Adams wrote last month that we live in a matriarchy because, “access to sex is strictly controlled by the woman.” Meaning that you don’t get to have sex with someone unless they want to have sex with you, which if we say it without any gender pronouns sounds completely reasonable. You don’t get to share someone’s sandwich unless they want to share their sandwich with you, and that’s not a form of oppression either. You probably learned that in kindergarten.

But if you assume that sex with a female body is a right that heterosexual men have, then women are just these crazy illegitimate gatekeepers always trying to get in between you and your rights. Which means you have failed to recognize that women are people, and perhaps that comes from the books and movies you have—and haven’t—been exposed to, as well as the direct inculcation of the people and systems around you. Art matters, and there’s a fair bit of art in which rape is celebrated as a triumph of the will. It’s always ideological, and it makes the world we live in.

Delicious.

Netflix VPN Followup

From Twitter, some questions from Friend of the Blog Miranda regarding my last post on Zone Shifting:

How does one define location? Where are you “located”, for example, if you’re in EU but have credit card with an American address?

And what about a free market argument when you just want to watch something that’s not legally available at that time in that location? Or if it’s not available at all?

Location, location, location

The short answer to the first question is that you’re located in your physical location, and you’re getting that country’s version of Netflix with the stuff Netflix has licensed for that country.

The long answer: every nation writes its own copyright law and sets its own copyright regime. When you’re in Foreign Countrystan, Foreign Countrystan decides whether the movie you’re trying to watch has copyright protection or not. That sounds like a terrible idea, and it is an incredibly terrible idea. In fact, the Western world realized this back in 1886 and signed the Berne Convention, back when people took like three baths a year.

Zone Shifting as Fair Use

Netflix announced this week that they’re cracking down on the use of VPNs. Among other things, VPNs let users connect to web sites “from” other parts of the world. I’m in New York, but I can use a VPN in Sweden to connect to the Swedish version of Netflix, which has a different selection of TV shows and movies than the American version.

Popehat’s Marc Randazza has some thoughts on Netflix’s announcement:

I frequently log in to my Netflix account from an Italian VPN. I like to watch movies in Italian. I am teaching my kids Italian, and I like them to watch their cartoons in Italian. The same cartoons that are on my Netflix USA account are also available on Netflix Italy. But, for some reason, Netflix does not give me the option to change the language to Italian, as it does if I log in through an IP address in Europe. Netflix could easily offer the same shows with the Italian language option in the USA, but for some reason, they would rather not.

Zone shifting is a legitimate use. I can understand that Netflix would rather not let me access “Better Call Saul,” from my proxy server. They don’t have U.S. distribution rights to it yet, so technically, if I were to access Better Call Saul on that proxy server, I’m violating someone’s rights.

A cursory look on Google seems to indicate that Randazza has coined “Zone Shifting.” I love it.

The Rise of the Machines

Last year, I became fairly obsessed with superintelligent artificial intelligences. I dipped a toe into the Iain M. Banks Culture series of books, science fiction set in a distant future where humanity has created thousands of godlike AIs to fly their ships and terraform their worlds. I do recommend it.

The next book I read was “Superintelligence: Paths, Dangers, Strategies” by the philosopher Nick Bostrom. Bostrom actually gets paid to think (and write nonfiction!) about artificial intelligence, what it might look like, and when it might arrive. We’ve all seen The Terminator and The Matrix, so you get the gist of how scary the “what” could be.

Raffi Khatchadourian, writing in The New Yorker, has a great review of the book and interview with Bostrom. It’s called The Doomsday Invention, and it covers the “when” of AI. Note that expert consensus on AI is that we’re about twenty years away from being able to create it, and that we’ve been twenty years away for about sixty years.

For decades, researchers, hampered by the limits of their hardware, struggled to get the technique to work well. But, beginning in 2010, the increasing availability of Big Data and cheap, powerful video-game processors had a dramatic effect on performance. Without any profound theoretical breakthrough, deep learning suddenly offered breathtaking advances. “I have been talking to quite a few contemporaries,” Stuart Russell told me. “Pretty much everyone sees examples of progress they just didn’t expect.” He cited a YouTube clip of a four-legged robot: one of its designers tries to kick it over, but it quickly regains its balance, scrambling with uncanny naturalness. “A problem that had been viewed as very difficult, where progress was slow and incremental, was all of a sudden done. Locomotion: done.”

In an array of fields—speech processing, face recognition, language translation—the approach was ascendant. Researchers working on computer vision had spent years to get systems to identify objects. In almost no time, the deep-learning networks crushed their records. In one common test, using a database called ImageNet, humans identify photographs with a five-per-cent error rate; Google’s network operates at 4.8 per cent. A.I. systems can differentiate a Pembroke Welsh Corgi from a Cardigan Welsh Corgi.

We’re not going to go extinct tomorrow, next year, or in ten years, but machines are getting exponentially smarter every day. It’s exciting, and only a little scary.

Hate Search

I’m not usually one for op-eds, but The Rise of Hate Search, by Evan Soltas and Seth Stephens-Davidowitz in the New York Times, is pretty stunning:

There are thousands of searches every year, for example, for “I hate my boss,” “people are annoying” and “I am drunk.” Google searches expressing moods, rather than looking for information, represent a tiny sample of everyone who is actually thinking those thoughts.

There are about 1,600 searches for “I hate my boss” every month in the United States. In a survey of American workers, half of the respondents said that they had left a job because they hated their boss; there are about 150 million workers in America.

In November, there were about 3,600 searches in the United States for “I hate Muslims” and about 2,400 for “kill Muslims.” We suspect these Islamophobic searches represent a similarly tiny fraction of those who had the same thoughts but didn’t drop them into Google.
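The “tiny fraction” claim is easy to sanity-check with back-of-the-envelope arithmetic. Here’s a quick sketch using only the figures quoted above (note the op-ed compares a per-month search count against an “ever” survey figure, so this is about order of magnitude, not precision):

```python
# Rough arithmetic behind the "tiny sample" claim, using the op-ed's numbers.
us_workers = 150_000_000
ever_hated_boss = us_workers // 2    # survey: half had left a job over a boss
monthly_boss_searches = 1_600        # "I hate my boss" searches per month, US

# Even among people who certainly had the thought, almost nobody Googles it.
fraction_who_search = monthly_boss_searches / ever_hated_boss
print(f"{fraction_who_search:.6%} of boss-haters search for it in a month")
```

By the same logic, a few thousand hateful searches a month would imply a far larger number of people thinking those thoughts without typing them.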

In 2016, there aren’t a lot of things more personal and intimate than what we search for online. (Relevant XKCD)

Feline Maximization

Here’s an interesting tale of copyright gone weird from Ars Technica. The interminable CBS sitcom The Big Bang Theory is being sued for copyright infringement of a children’s poem called “Soft Kitty”. The poem reads, in its entirety:

warm kitty, soft kitty little ball of fur sleepy kitty, happy kitty purr purr purr

Really? Fifteen words? Three of which are “purr” and four are “kitty?” That has to be some kind of record. There’s no way you can copyright that, right?

Well, yes. You can copyright a haiku. You can copyright surprisingly short things. The only real requirements are that you write it down and that it’s creative. Federal courts have interpreted the creativity requirement to imply some minimum length: you can’t copyright a poem which is one word long. There’s nothing creative about reciting a lone word. But a super long poem doesn’t guarantee copyright either; a list of every word in the English language in alphabetical order isn’t creative. It’s a lousy dictionary.

Fourteen Zeros

Here’s a provocative title from the usually sober Ars Technica: Secret Source Code Pronounces You Guilty As Charged:

Secret code now has infiltrated the criminal justice system. The latest challenge to it concerns a handyman and a convicted sex offender named Martell Chubbs, now accused of a 1977 Long Beach, California murder. Local police were investigating cold cases and arrested Chubbs after DNA taken from the crime scene long ago matched a sample in a national criminal database, the authorities said.

A private company called Sorenson Forensics, testing vaginal swabs from the victim, concluded that the frequency in the profile occurrence in the general population was one in approximately 10,000 for African Americans. The same sample, when examined by Cybergenetics at the company’s Pittsburgh lab, concluded that the DNA match between the vaginal sperm sample and Chubbs is “1.62 quintillion times more probable than a coincidental match to an unrelated Black person,” according to court records.

Okay, both of those sound like slam dunks, right? What’s the problem with the Cybergenetics analysis if Chubbs is screwed either way?

Well, let’s back up a bit. What exactly do those numbers mean? They’re the likelihood that some random person has the same DNA profile as the person who left DNA at the crime scene. Take Sorenson’s one in ten thousand number. It doesn’t mean there are 10,000 to 1 odds that Chubbs did it: that’s the prosecutor’s fallacy talking. It also doesn’t mean that if there are 20 million black men in America, there are 2,000 people whose DNA would match the killer’s, so there’s only a 1 in 2,000 chance that Chubbs is the killer. That’s the defense attorney’s fallacy.
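To see how the two fallacies relate, here’s a numerical sketch (the 20-million suspect pool is the hypothetical from the paragraph above, not real demographics). Under a flat prior over that pool, Bayes’ rule lands almost exactly on the defense attorney’s number, which is exactly why that fallacy lives or dies on the assumption that every matching person is an equally plausible suspect:

```python
# A sketch of why neither fallacy yields "the probability Chubbs did it",
# using Sorenson's 1-in-10,000 random-match probability and a hypothetical
# pool of 20 million alternative suspects.
match_prob = 1 / 10_000   # P(DNA match | innocent random person)
pool = 20_000_000         # hypothetical suspect pool, not census data

# Defense attorney's fallacy: about 2,000 people in the pool would also
# match, so the defendant is "only 1 in 2,000" -- which quietly assumes
# every matching person is an equally likely suspect, ignoring all other
# evidence (the cold-case investigation, location, age, and so on).
expected_other_matches = pool * match_prob

# Prosecutor's fallacy: reading P(match | innocent) = 1/10,000 as if it
# were P(innocent | match) = 1/10,000. Bayes' rule shows what the match
# actually does to a prior of "picked uniformly at random from the pool":
prior = 1 / pool
posterior = prior / (prior + (1 - prior) * match_prob)

print(f"expected other matches in pool: {expected_other_matches:.0f}")
print(f"P(guilty | match) under a flat prior: {posterior:.4%}")
# Under the flat prior the posterior is roughly 1/2,001 -- nearly the
# defense number. The flat prior is the assumption doing all the work.
```

Any real case supplies a prior that isn’t flat, which is why the raw match statistic alone settles nothing in either direction.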

Consider the Source

Volkswagen’s diesel cars pollute far more than the company has claimed for the last decade or so. The New York Times talked with Eben Moglen, who’s been evangelizing open-source software for several centuries, and he points out that this scandal could have been discovered before it started if not for closed-source software:

“Software is in everything,” [Moglen] said, citing airplanes, medical devices and cars, much of it proprietary and thus invisible. “We shouldn’t use it for purposes that could conceivably cause harm, like running personal computers, let alone should we use it for things like anti-lock brakes or throttle control in automobiles.” […] “If Volkswagen knew that every customer who buys a vehicle would have a right to read the source code of all the software in the vehicle, they would never even consider the cheat, because the certainty of getting caught would terrify them.”

Moglen’s definitely not wrong, though I wouldn’t hold my breath on the “open-source software in anti-lock brakes” bit. I think the fact that it’s a felony to tinker with your car’s software is absurd, and that it’s impossible to actually regulate the functioning of closed-source software. But Volkswagen didn’t trick the E.P.A. with closed-source software.

From the Times article again:

When the test was done and the car was on the road, the pollution controls shut off automatically, apparently giving the car more pep, better fuel mileage or both, but letting it spew up to 35 times the legal limit of nitrogen oxide. This cheating was not discovered by the E.P.A., which sets emissions standards but tests only 10 to 15 percent of new cars annually, relying instead on “self certification” by auto manufacturers.

Federal regulators and their European counterparts were bamboozled because the car companies were the ones doing the testing. That’s beyond ridiculous. Think of Volkswagen as a student who got ahold of the answer key and spent all night memorizing the answers to the final exam, only to be asked to grade his own test paper and report his grade to the teacher.

What Volkswagen did was pretty awful, but it’s not surprising. If Volkswagen’s engines pollute too much, they don’t get to sell cars in America. That’s objectively a good thing; I rather enjoy that cars are under strict(?) regulations on the amount of poisonous material they can produce. But if you have those kinds of stakes and then let companies grade their own performance, they’re going to cheat. Full stop.

If we’re being honest, the idea that the E.P.A. didn’t have the resources to check the math itself is the really insane part. Open-sourcing Volkswagen’s software would have been an instant fix for this, but regardless of whether that happens, the E.P.A. should absolutely be able to afford to drive a car around in circles and measure what comes out of the tailpipe.