Under the doctrine that software "trust" is needed, YOU are the attacker. It's entirely about stripping your control (and thus your ownership) of the hardware you paid for (see the SafetyNet shitshow).
There's a second use whereby I somehow bind my own OS hash to my own data encryption key, so nobody who changes the OS can read the data. The technical distinction between this and the previous: if it's designed for the device owner's protection, the device owner can reset the system.
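To sketch that binding concretely: derive the data-encryption key from a measurement (hash) of the OS image, so a modified OS simply cannot reproduce the key. This is a toy illustration of the idea, with all names mine; a real system would use a TPM's measured boot and sealed storage rather than a plain HMAC:

```python
import hashlib
import hmac

def sealed_key(os_image: bytes, device_secret: bytes) -> bytes:
    """Derive the data key from a hash of the OS image.

    If anyone swaps out the OS, the measurement changes and the derived
    key (hence the data) is unrecoverable. Illustrative scheme only.
    """
    measurement = hashlib.sha256(os_image).digest()
    return hmac.new(device_secret, measurement, hashlib.sha256).digest()

# Stand-in for a secret fused into hardware, never exposed to software:
secret = b"hardware-bound-secret"

k1 = sealed_key(b"my-os-v1", secret)
k2 = sealed_key(b"tampered-os", secret)
assert k1 != k2  # a modified OS derives a different, useless key
```

The "designed for the owner" distinction shows up here as who controls the reset path: if the owner can re-seal data to a new OS measurement, it protects them; if only the vendor can, it controls them.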
Just because you have root doesn't mean it's a vulnerability. Can he read other customers' data? Can he interact with the internal network?
Do you want to know how you can get code execution on Microsoft's servers? Easy: go to GitHub and spin up a GitHub Action.
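A minimal (hypothetical) workflow file is all it takes; every `run` step below executes on a fresh, sandboxed VM in Microsoft's infrastructure, which is exactly why it isn't a vulnerability:

```yaml
# Hypothetical .github/workflows/anything.yml; names are illustrative.
name: arbitrary-code
on: push

jobs:
  run-anything:
    runs-on: ubuntu-latest   # an ephemeral VM hosted by Microsoft/GitHub
    steps:
      - run: |
          # Any command you like runs here, "on Microsoft's servers".
          uname -a
          whoami
```

Code execution in an isolated sandbox you were invited into is the expected product, not a finding; the same standard should apply to the root-on-a-device report.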
The SSRF section does NOT prove SSRF. Just because you can make a server fetch attacker-supplied URLs doesn't automatically mean it can reach internal things, and it doesn't automatically mean it's exploitable. Far from it.
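To make the distinction concrete: a URL fetch only starts to matter as SSRF if the server can be made to reach addresses the attacker couldn't hit directly. A rough sketch of the check a proof-of-concept would need to demonstrate (function name is mine):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def resolves_to_internal(url: str) -> bool:
    """Return True if the URL's host resolves to a private, loopback, or
    link-local address, i.e. something an external attacker could not
    reach directly. Raises socket.gaierror on unresolvable hostnames."""
    host = urlparse(url).hostname
    if host is None:
        return False
    for info in socket.getaddrinfo(host, None):
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return True
    return False

# Reaching the cloud metadata endpoint would be worth reporting:
print(resolves_to_internal("http://169.254.169.254/"))  # True (link-local)
# Fetching a public address proves nothing: anyone can reach it anyway.
print(resolves_to_internal("http://8.8.8.8/"))          # False
```

Even then, reaching an internal address is only the first step; exploitability still depends on what that internal service does with the request.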
The user location leak is also not a leak, since it's fair to assume that the user already knows his own physical location. It would be interesting if there were a way to reveal the location of other users, but alas that isn't mentioned, let alone proved.
> Allowing women to make a selection based on this likelihood means that female customers that are alone can make choices to still use the service while reducing the overall risk.
I'm failing to see how anything you say could be used as a guideline to pick between "good" discrimination and "bad" discrimination.
The major distinction you draw between "Type II" and "Type I" is the fact that one is fueled by "arbitrary aversion" which is not a particularly useful distinction.
What if I denied black people entry to my bar because "they commit more crimes" and "are more likely to break stuff"? Is it morally ok? Why not?
My opinion is that no, it's not ok, because the majority of people punished were never going to behave in an uncivil way.
The same logic can be easily applied to this situation. Are men more likely to behave sexually inappropriately (which ranges from verbal harassment to assault)? Sure.
Is it the majority? Hell no, it's nowhere close.
(Of course it's worth noting that the "majority" doesn't necessarily mean 50.01%; it's just an arbitrary line you can draw, as long as you're consistent about it.)
The point I took away is that since the normal methods of "ok discrimination" are not available and Uber refuses to do the needful on their behalf, women should be able to "use the big gun".
The reality is that if Uber rapes are an issue and something like this is not allowed, women will just stop using the service entirely.
Or special Uberpods will be developed where the driver is completely encased and the passenger has an "auto drive to police station" button.
If someone is presenting themselves to you in person for entry into your bar, you have far more information to make a judgement on than the color of their skin... so it is not the same.
In the case of a woman coming into contact with some driver and volunteering location information like her home address, she has little to no information to make that judgement. Providing her just that bit of information, and allowing her to discriminate based on it, makes her safer. Ideally, she'd have way more information than just whether the driver is male or female. The reputation information helps, but isn't always reliable.
>If someone is presenting themselves to you in person for entry into your bar, you have far more information to make a judgement on than the color of their skin... so it is not the same.
So the difference between "good" discrimination and "bad" discrimination is the amount of information the decision is based on?
Logically, then, Uber could add a "white only" option, a "no queer" one, and a "no leftist" one.
(Of course these are arbitrary, but you can easily come up with a reason why: if you split any group of real people in two, it's only natural that one half has a higher incidence of some negative trait.)
This also has a second problem: what if we let the passenger know not only the sex but also whether the driver ate fish in the morning (and hundreds of other useless facts)? Does that stop being discrimination because they now have far more information?
I guess not, but then how do you decide which information is valuable enough to judge the individual instead of going off statistics?
How can you say that our theoretical racist patron is in fact racist and not going off the only valuable information?
No, that's a question.
I imagine it's not that, since the rest of my comment is dedicated to pointing out how that would be racist.
I was trying to get you to explain what exactly the difference is, since you didn't clearly define it in your reply.
Uber is also the one deciding to offer a rideshare service where men are banned from working for them. Uber has the choice between vetting their employees and discriminating based on a correlated proxy. They chose the latter, and this discussion is about whether that is legal.
I don't need fraud protection and the ability to charge back when paying for lunch at a restaurant; I just need a skimmer-resistant payment method (which a phone is).
>If you can bypass paywalls by using google's cache feature
That is quite different. Google serves (used to serve) its users whatever the website presents to its crawler; it does not try to avoid paywalls or interact with the website in any capacity other than requesting information.
How did Grok gain access to this supposedly private information? Did it pilfer her private emails? Did it hack the producer's website and gain access to confidential files? Did it look through her computer?
Look, I hate Grok just as much as the next person, but if it was just crawled, then by definition it is not private.
You may very well argue that people are harassing her (and that it's not ok), and you may even argue that AI should not facilitate such harassment, but to call publicly available information private is mental gymnastics.
> How did Grok gain access to this supposedly private information? Did it pilfer her private emails? Did it hack the producer's website and gain access to confidential files? Did it look through her computer?
Or just read the Instagram and Facebook pages for this stage name. This "private" info is right there.
Yes, it's quite similar.
They blocked some lawful services too, such as Google Drive (yes, really), and a TON of sites behind Cloudflare by blocking some of its IPs (that happened a while ago; it's not directly related to this).
I find it extremely funny that I came across this spammy comment while sitting on a vulnerability in their code, because my attempts at contacting them have been unsuccessful.