That is true, but the court also expressed skepticism about the CFAA interpretation; since the venue was improper, though, it didn't need to rule on the CFAA.
I am unaware of any case going to trial using that interpretation of the CFAA since, though.
It's not the incrementing of the integer that was the problem. It was knowingly accessing records that you know you shouldn't have access to. If judyrecords had a method of crawling pages that involved incrementing integers, that isn't a problem on its own. If they were using that method with a good-faith belief that they were accessing data they had the right to access, that fact is what makes it an entirely different scenario from what happened with weev.
Agreed, however if a government official says “These records are confidential, please don’t look at them” and then they leave them on a park bench in front of you, then it is a felony for you to look at them.
In this case, it would be like going through the UI, trying to access the record, and getting denied because of a client-side access block, so you make a direct call to the backend instead to retrieve the record. You’re making a perfectly legitimate HTTP request, but for something you know you shouldn’t be able to access: illegal.
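A minimal sketch of why a client-side block alone secures nothing (the record store and function names here are hypothetical, not the actual system):

```python
# Hypothetical sketch: the access check lives only in the UI layer.
# The web UI refuses to render a link to a confidential record, but the
# backend handler never checks authorization, so a direct request for
# the same id is served anyway.

RECORDS = {
    1001: {"public": True,  "body": "routine case filing"},
    1002: {"public": False, "body": "confidential discipline record"},
}

def ui_link_for(record_id):
    """What the UI renders: confidential records get no link at all."""
    rec = RECORDS.get(record_id)
    if rec is None or not rec["public"]:
        return None                    # hidden from the page...
    return "/records/%d" % record_id

def backend_fetch(record_id):
    """What the backend does on a direct call: no authorization check."""
    return RECORDS.get(record_id)      # ...but served to anyone who asks
```

The UI hides record 1002, yet `backend_fetch(1002)` returns it anyway; the "block" only ever existed in the client.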
Obviously. But I was responding to a particular argument. Why is "knowingly accessing records that you know you shouldn't have access to" so different in this case? There's no trespassing or equivalent; the computer itself was set up for public access.
And "protected computer" is basically all computers. Imagine if the records were on a kindle; that shouldn't change the legality and if the CFAA does so that's a bad thing.
A phone is a personal device and I'd like that to be treated differently in many ways.
A public web server doesn't have such direct privacy issues.
When it comes to records on a bench vs. a non-personal kindle on a bench, I think they should have equal and low protection. Abusing the data should face penalties, but not poking around.
Why should you have any right to “poke around” a kindle you find on a park bench? Beyond any general rights you might have permitting you to take ownership of lost property, that is.
I don’t see any obvious reason as to why this should be allowed, but it’s trivial to come up with a whole plethora of reasons for why you shouldn’t be allowed to poke around such devices.
Most states use the Model Penal Code's (MPC) classification for the various mentes reae. The MPC organizes and defines culpable states of mind into four hierarchical categories:
1. acting purposely - the defendant had an underlying conscious object to act
2. acting knowingly - the defendant is practically certain that the conduct will cause a particular result
3. acting recklessly - the defendant consciously disregarded a substantial and unjustified risk
4. acting negligently - the defendant was not aware of the risk, but should have been aware of it
Thus, a crime committed purposely would carry a more severe punishment than one where the offender acted knowingly, recklessly, or negligently. The MPC greatly impacted the criminal codes of a number of states and continues to be influential in furthering discourse on mens rea.
Some have expanded the MPC classification to include a fifth state of mind: "strict liability." Strict liability crimes do not require a guilty state of mind.
What does "shouldn't have access to" mean? The web server has a permissions system that determines what access level to grant in response to any request. If you are granted access in response to a valid request, then what other interpretation can there be apart from the server deciding that you "should have" access?
I remember an old HN comment about this, but can’t find it right now. It went something like this (but obviously worded far more eloquently):
Hackers love to think that they’re Captain Kirk outsmarting the computer, but real life isn’t Star Trek, and judges are very much human and don’t look kindly on such stunts.
A reasonable person would know that you aren’t authorized to dump AT&T’s customer database by incrementing an integer on their site.
A reasonable person wouldn't build a house with no walls to store their secrets in, and then put the house in a public place and give access to the public.
Or I guess I can just start up a website at "youre-unauthorized.com", so every reasonable person is duly noticed that they aren't authorized to see it, put my secrets there, set the web server to allow access to all requests from everywhere, and then file a criminal complaint against everyone who accesses the secrets I put out in public.
A reasonable person knows intuitively that the only crime committed was that of embarrassing the rich and/or well connected.
> A reasonable person wouldn't build a house with no walls to store their secrets in, and then put the house in a public place and give access to the public.
That's obviously true under some formulations, but it doesn't matter, because they won't be on trial. The person who performed the unauthorized access will be.
> A reasonable person knows intuitively that only crime committed was that of embarrassing the rich and/or well connected.
I consider myself a reasonable person and I'm perfectly happy to have unauthorized access punishable under the law. I value the fact that society takes an onion-like approach to information security. There are incentives for private organizations to secure data, but when they fail to, the risk of criminal sanctions probably prevent some breaches that would otherwise occur. I also do not value the ability to look at computer systems on an unauthorized basis -- i.e. I do not think it brings any value to society -- so by my lights, I lose nothing by it being illegal.
> I also do not value the ability to look at computer systems on an unauthorized basis -- i.e. I do not think it brings any value to society -- so by my lights, I lose nothing by it being illegal.
Not only that, but despite what views borderline-ASD hackers might hold, courts do make decisions about vague things like “intent” on a daily basis.
> A reasonable person wouldn't build a house with no walls to store their secrets in, and then put the house in a public place and give access to the public.
A reasonable person might fail to properly lock their door. Try that defense in front of a judge; odds are you’ll end up in prison.
Does a reasonable person still have an expectation of privacy if not only does he leave the door open, he sits by while a supposed intruder walks in and out not once, not twice, but one hundred thousand times (in addition to unknown numbers of other intruders multiplied by unknown numbers of more times)? Not only was the organization so derelict in their affairs that they failed to protect sensitive customer data, they didn't even notice the "crime" had taken place (a hundred thousand times), and in fact, would never have noticed, and would prefer not to have noticed, had they not been forced by the threat of public disclosure.
Nobody is sitting by these servers, looking at packets as they pass by.
> Not only was the organization so derelict in their affairs that they failed to protect sensitive customer data, they didn't even notice the "crime" had taken place
None of this would reduce the intruder’s liability. Perhaps the company should be tried separately for its failure to protect customer data, but that’s a different issue.
It means the intent of the person who created or operates the system. If I forget to lock my front door that doesn’t mean you should have access to my house. Appropriately, it does at least mean you can’t be accused of breaking into it, so the analogy holds up fairly well.
It means that the stakeholders of the system, normally its owners, do not mean for you to have access. Here's one way you can estimate whether the owners intend that you should have access:
Imagine an in-person conversation with the owner or controller of the data, or their most knowledgeable representative. If you asked them verbally whether you may access the data, and they said "no," then you "shouldn't" have access to the data.
> If you are granted access to a valid request then what other interpretation can there be . . .
See above. This is also the interpretation that will be relevant in court if you are sued or arrested, so mark it carefully.
This is like reading an article without looking at the table of contents. If something shouldn't be part of the article, you don't hide it by removing it from the table of contents, and you can't blame me for paging through the article page by page and thereby getting pages not in the table of contents.
Huge difference between this and weev's case. weev did what he did knowingly and maliciously.
Intent matters, weev knew his access wasn’t authorized. Judyrecords didn’t know they were accessing non-public data by incrementing the integer, weev did.
IMO the standard should consider both industry best practice for securing sensitive data in proportion to its sensitivity, as well as intent. It's harder to argue breaking and entering if you secure a door with a bent clothes hanger or carabiner vs. a padlock, even a bad one. (Clarifications welcome.) An incrementing integer is like the carabiner; a botched JWT implementation might be like a bad padlock, etc.
The attorney discipline records were not supposed to be public, but were inadvertently accessible in this one public database, one of hundreds indexed by Judyrecords. No one noticed until they became easily searchable. Once notified, all parties, including Judyrecords, moved swiftly to remove the records from public access.
The public is now aware. I believe this leak included investigations where it was deemed no discipline was necessary. As one example, here’s what Michael Avenatti’s bar entry looks like:
Honestly I’m surprised they admitted this. Historically companies and organizations would always blame whatever the current hacking buzzword is. I recall APT being an especially annoying example of this, until that buzzword disappeared and then suddenly everyone’s hackers stopped being APT.
I have decidedly mixed feelings about the legal profession, but most lawyers (especially the "establishment" types most likely to be involved in the CA bar) are *very* unlikely to make deliberately false statements (or to fail to correct a past statement once they learn it was false).
In defiance of thousands of lawyer jokes, in my experience lawyers lie the *least* of any large group of humans I've encountered – at least if "lying" refers only to the specific denotation of the words, since lawyers also love to split hairs to say something that's technically true. But the distinction between a "hack" and a "database error" is _exactly_ the sort of hair that lawyers love to split!
I appreciate that they are not memorializing this as a “hack.” That’s a mature stance. However:
Sarcasm alert: so the gist is engineering is hard and we should never assume actors within a system will act predictably: you don’t say!
Security is hard but necessary; it’s never convenient. It takes relentless discipline and suspicion, which is dissonant with conventional software team dynamics.
I hear it time and time again, “this isn’t an issue”
Every time a story comes out like this, I think about how ill-equipped these organizations must be in mitigating these breaches.
I don’t pay very much attention to prosecution of cases under the CFAA, but at a surface level this reminded me of the weev / AT&T case [1], where he was convicted for (afaik) incrementing an id and fetching all the records. I didn’t remember the outcome, but it looks like it was overturned later without taking a solid stance on the technique [2]
I’d definitely believe other cases have happened in the last 8 years that have done a better job of clarifying the law, but I’m not aware of them.
It sounds like A01: Broken Access Control. Simply that judyrecords was marching through record ids and the backend was not authorizing that access. I think they said the backend is a relational database.
This is the most common security issue right now according to OWASP. You have to understand that these issues come from the fact that lawyers likely oversaw this project and picked contractors that were "best value" (low cost).
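At its simplest, A01 is exactly what's described above: the handler trusts whatever id appears in the request and never checks whether that record should be served. A minimal sketch with hypothetical handler names (the fix enforces the access decision server-side, per record):

```python
# Hypothetical sketch of OWASP A01 (broken access control): the vulnerable
# handler serves any id that exists; the fixed handler checks, per record,
# whether it is actually meant to be public.

DB = {
    1: {"confidential": False, "body": "public case record"},
    2: {"confidential": True,  "body": "attorney discipline record"},
}

def get_record_broken(record_id):
    """Vulnerable: returns any record whose id exists, confidential or not."""
    return DB.get(record_id)

def get_record_fixed(record_id):
    """Fixed: the access decision is enforced on the server, per record."""
    rec = DB.get(record_id)
    if rec is None or rec["confidential"]:
        return None   # behave like a 404, rather than leak the record
    return rec
```

With the broken handler, marching through record ids eventually returns every confidential record; with the fixed one, the same march only ever yields what was meant to be public.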
That’s kind of a tautology. Technically speaking, all hacks require some kind of flaw, whether intentional (backdoor) or unintentional (bug); you can’t hack something if there’s nothing to exploit.
https://www.calbar.ca.gov/About-Us/News/Data-Breach-Updates
Notably:
- they now acknowledge that it wasn't unlawful for judyrecords to access (and republish) the records that were unlawfully published
- they talk about judyrecords using a 'unique method' to access the records; I'm guessing the method is something simple like 'incrementing an integer', and that they are trying to make it seem more mystical.
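If the guess above is right, the "unique method" may be nothing more than a loop over sequential record ids. A hedged sketch of what such a crawler looks like (the `fetch` argument is a stand-in for an HTTP GET against the real site; nothing here reflects judyrecords' actual code):

```python
# Hypothetical sketch of crawling by "incrementing an integer":
# walk record ids upward, collecting whatever comes back, and stop once
# several consecutive ids return nothing.

def crawl(fetch, start=1, misses_allowed=3):
    """Walk sequential ids; stop after misses_allowed consecutive misses."""
    found, misses, record_id = [], 0, start
    while misses < misses_allowed:
        rec = fetch(record_id)   # stand-in for an HTTP GET of /records/<id>
        if rec is None:
            misses += 1          # count consecutive empty responses
        else:
            misses = 0           # a hit resets the miss counter
            found.append(rec)
        record_id += 1
    return found
```

The consecutive-miss cutoff is how a naive sequential crawler typically detects the end of the id range while tolerating small gaps; there is nothing mystical about it.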