This is also a good example of the benefit of telemetry: that they have crash numbers coming back from the field lets them tell that this really did work in practice and get a sense of how much of the problem they've solved.
opt-in telemetry is effectively the same as no telemetry.
if you (or anyone) have a problem with crash statistics being tracked via telemetry, then I have absolutely no idea how to convince you it's a good idea beyond what this blog post already clearly states.
it's the same with OS updates; people (generally) simply will not perform system or security updates unless they are forced, because everyone thinks they are smarter than the "script kiddies" who would use an attack against them. the user thinks they would see an attack coming and avoid it. in reality, they wouldn't. viruses spread, the US Congress calls Microsoft in and asks why the systems weren't patched, Microsoft says "the users are responsible for patching", and Congress doesn't like it.
so now we are where we are. OS updates are forced after a time, and telemetry is not only the norm, but a very good idea for applications in use by millions of people, like Firefox.
> everyone thinks they are smarter than the "script kiddies"
No. People do not perform system updates because a) it's a chore and b) it gets in the way or even breaks things. To do an update I need to agree to give up control over my device for some time (often undetermined) and then risk that updated code causes issues (which is not uncommon). We need to design apps and operating systems with seamless and reliable updates in mind, not force people to suffer.
Or maybe Microsoft finds it cheaper and easier to leverage their monopoly position by operating in "perpetual beta", rushing out new features before competition can get a foothold, then using their (paying) users as testers. Rather than, say, testing and hardening their products sufficiently before launch.
> The tone you employ - "users" having "a problem", needing to be "convinced" and "forced" - doesn't help.
My tone here is borne out of users shooting themselves in the foot and then complaining about the pain and inability to walk. At every opportunity that I have taken to give computer users the choice to do the thing that is good for them, the overwhelming majority have failed to make that choice. The people who visit this site are mostly not that kind of person, though there are plenty here who are.
We have shown Microsoft that we simply will not update our operating system or even reboot unless forced. Many, many times we have made this clear. Even though patching is overwhelmingly a net positive for both MS and its users, generally speaking, users simply won't do it. They just won't. History bears this out.
Updating is a short-term inconvenience in exchange for long-term security and stability, and people do not think about those things logically. The importance of the immediate future is amplified by a large factor, and the importance of the distant future is attenuated by a large factor, in most people, especially people who view their computer as a tool. Sitting in front of a computer is indicative of a user wanting to complete a task, and manual updates impede the ability to perform that task. That makes installing updates and stopping work to reboot a non-starter for those people. They just won't do it.
I don't know how else to say it. It's not a matter of tone so much as it is a matter of fact.
Thanks for your thoughtful reply, I hope my additional remarks are
taken as sincere and not in any way personal, except in a good way.
> My tone here is borne out of users shooting themselves in the foot
Frustration. I hear you. Sure, it's frustrating that they exercise
choice (however misguided you see that) and then "complain". It's very
nice if you've written code, and even nicer that you care for your
customers. But users aren't our children. I've been there and it's
galling, and it feels like a rejection, but to accept what is
unrequited is sometimes harder than giving it.
> the choice to do the thing that is good for them,
This is the elitists' dilemma. Please don't be insulted by that word,
I'm using it literally and appropriately without value judgement (I am
foremost an elitist, and secondarily a people's champion, and it is a
position that can only ride on a measure of arrogance - which must be
tempered).
The fact is, it's not your computer. And that really is the long and
short. One must respect that if "users" do not wish to take advantage
of bug fixes more speedily available through telemetry, then it's
their choice to have suboptimal, buggy programs.
In other news, our children will listen to shit music and get into
drugs and relationships we disapprove of etc.
> We have shown Microsoft that we simply will not update our operating
system or even reboot unless forced.
For very good reasons. Microsoft have shown themselves to be utterly
untrustworthy. I really don't think that's even debatable. And it's a
shitshow because I do not believe trust can ever be repaired. It
leaves the reality that one of the biggest vendors on the planet is in
the position of forcing users because it has squandered the
reputation necessary to do good-faith business, to propagate its
updates. That's tragic because they probably see no way out except
doubling down on abuse, authoritarianism and beating users to their
will - and ultimately that confrontation will be the end of so much we
have built.
What makes this worse is that security is about more than personal
choice (think vaccinations). In other words the damage that Microsoft
(and other big-tech abusers) have done goes far beyond simply
destroying the individual trust relations with their
customers. They've corroded the social fabric of trust in computer
security at a more general level - a cost that is incalculable.
> people do not think about those things logically.
You are right. And we should not assume that they should. Emotion is a
powerful reasoning tool, and only a fool ignores that force of
psychology. Once burned, twice shy - and we as developers have been
burning a lot of people's fingers these past 30 years.
> I don't know how else to say it. It's not a matter of tone so much
as it is a matter of fact.
I see it means a lot to you. That is a good thing in itself. You care,
which is x10 above the norm.
But we cannot force them to be what we wish them to be. Especially not
"for their own good" which is where all tyranny begins. We cannot
force people to adopt products, customs, behaviours, sing the party
line, or any of that hegemonic nonsense without invoking an age of
"consumer communism".
It saddens me that in 2022 we still need to address the patrician
attitude. It's not the way forward. It's sad to see such a
deterioration. But so long as companies like Microsoft persist in a
culture of smug superiority, cavalier conceit and intransigent
disrespect to the dignity of their users we will have to accept "fuck
you" decision making. And frankly, more power to those courageous
enough to say it.
> it's the same with OS updates; people (generally) simply will not perform system or security updates unless they are forced, because everyone thinks they are smarter than the "script kiddies"
Updates were not the default. And when they became almost mandatory Microsoft started bundling "features" with security updates.
That's when people started to disable this "feature".
> so now we are where we are. OS updates are forced after a time, and telemetry is not only the norm, but a very good idea for applications in use by millions of people, like Firefox.
And this doesn't change anything. Ransomware attacks are still the norm.
> That's an indication that people don't want this.
It's an indication that optional steps which do not immediately benefit the users of the software will not be taken.
The benefits to telemetry are longer-term, and because opting in is not required for the software to function, the vast majority of users simply will not do it.
The thought to turn it on will likely not even enter their mind. Why would it? The software works fine.
Opt-in telemetry was tried by just about everyone who collects telemetry today. Lots of people say they will turn it on, and then never do. Telemetry is used to make better software. I'm sure there are companies that use it for [insert activity that any person might perceive as bad], and I would argue that those companies would likely not allow you to opt out.
If people understood the kinds of things that are collected - at least the things I collect in the software I write for work - I can't imagine anyone having a problem with it. But there are a lot of things people do which make no sense to me at all, so I'm not really in a position to be authoritative.
I do know what happened in the late 1990s and the early 2000s though, and I know those things are a large part of why telemetry and forced updates are things which exist today.
Opt-out telemetry opens the door to abuse and is invasive. We can see this happening today already, and that the current trend for opt-out telemetry + forced update is not a reaction to "opt-in is useless".
Regardless of how strongly you feel about telemetry and its perceived "benefits", the choice should always rest in the hands of the individuals using the software first and foremost. Your customers should always be informed of their choice and if they feel like they are willing to participate, they can opt-in.
How would you feel if building architects decided to install a camera in your bathroom in order to analyze how you use your toilet and the shower to assist in improving future constructions?
> Lots of people say they will turn it on, and then never do.
Because most of the time, there is no benefit. The software is already made, and further development is rarely informed by telemetry data. Furthermore, customers are rarely informed exactly what data is being collected - and not given the opportunity or benefit to inspect the contents of any telemetry or crash logs that need to be sent.
> If people understood the kinds of things that are collected, at least the things I collect in the software I write for work
But they don't. And no one takes the time to educate or inform them or give them the choice to opt-in with a detailed disclosure of what is being shared, rather than having to opt-out. As people become more aware of these telemetry practices, you're going to see a wider backlash at the kind of unnecessary data that are being collected. Maybe you're not doing it - but others are.
I believe the absolute vast majority of telemetry is simply ignored. And I also believe that in most companies there are several individuals who couldn't care less about what is included in the telemetry.
A lot has changed since the late 90s; that things were bad then really isn't a good argument for telemetry or forced updates. Things would have been absolutely awful in the late 90s even if you had perfect telemetry and instant updates that somehow didn't even need the internet.
I don't see any real arguments for either in your posts.
Agreed. There is a pop-up asking if you want to help at install (FF & TB). Big data has earned its reputation for just cause. If the Mozillas want to differentiate themselves to get that trust, aka opt-in, I encourage them. So far, "No, thanks".
There might be a correlation between people who push their memory consumption to the limit and people who never opt into anything they don't directly benefit from, thinking that such optional stuff would hinder them from using all of their memory for personal benefit. OTOH, those people might realize that telemetry will eventually increase efficiency, allowing them to do more with their available memory...
The only telemetry you get will be from users that are experiencing a severe problem who also have the technical know-how to turn it on. You will have no idea how prevalent the problem is amongst the general user base. There would be no way to prioritize debugging the issues.
I was under the impression that Firefox was written in Rust. Doesn't this eliminate crashes? Rust is a safe language after all. There should not be any crash logs with Rust.
Is this satire? Rust eliminates memory errors such as use-after-free. Unlike in C and C++, `myVec[usize::MAX]` will `panic!` if `myVec` isn't actually that large.
Rust has a `panic!()` macro which will hard-terminate the program and log a stack trace in what is functionally equivalent to a crash. It can be called in various scenarios... including out-of-memory situations (like the ones being addressed in the fine article and this thread).
And even if Firefox were written in 100% Rust, Rust can still panic on logic errors. And system libraries have their own bugs and crashes that can bring down Firefox.
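To make the distinction concrete, here is a minimal sketch (not Firefox code): an out-of-bounds index in Rust is memory-safe, but it still aborts via a panic, which is exactly the kind of event a crash reporter would capture. `catch_unwind` is used here only so the program can observe its own panic.

```rust
// Out-of-bounds indexing in Rust is memory-safe but not crash-free:
// it panics at runtime instead of silently reading bad memory the
// way C or C++ might.
use std::panic;

fn main() {
    let v: Vec<u32> = vec![1, 2, 3];
    // Indexing past the end does not exhibit undefined behaviour;
    // it panics, which a crash reporter would record as a crash.
    let result = panic::catch_unwind(|| v[10]);
    assert!(result.is_err());
    println!("out-of-bounds index panicked, as Rust guarantees");
}
```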
I would like Linux distributions to ship a system wide telemetry service that can be enabled / disabled at the installation time or anytime later on.
This service would be guaranteed to be unidirectional, would store data publicly on non-profit-run servers and domains, and would fully comply with the GDPR (by not storing any PII and anonymising/pseudonymising everything).
Developers would connect to this service over dbus and consume the uploaded data in daily batches.
Hosting and hardware fees would come from donations by distributions and other organizations distributing money to the FLOSS ecosystem.
> I would like Linux distributions to ship a system wide telemetry service that can be enabled / disabled at the installation time or anytime later on.
There's nothing stopping a person from creating that. You'd package it up and get it added to the Debian, Ubuntu, RedHat, etc. repos and people would be able to install and use it. That's about as close as you'll get to having it generally available for all Linux distros.
Personally I don't see the value, and think it's invasive, so I would never install it, but people who wanted it would be able to use it.
I think what they meant is to configure proxying all possible telemetry via this service, and enforce anonymisation. Sounds good imho. I'm trying to disable telemetry but it's always a losing game; each version adds something new to each app.
The telemetry proxy service would need packages for each distro, including scripts to work with systemd and init, and maybe a "libtelemetry" package to make using the service easier.
The way I see it working is that if the system service isn't installed, isn't running, or has remote telemetry turned off then the commands for sending telemetry will succeed but send the data to /dev/null. Otherwise the data gets anonymized and uploaded to the user's configured telemetry host.
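A rough sketch of that no-op behaviour might look like the following. Everything here is invented for illustration: the socket path, the wire format, and the whole API are assumptions, not a real service.

```rust
use std::io::Write;
use std::os::unix::net::UnixStream;

// Hypothetical well-known socket the system service would listen on.
const SERVICE_SOCKET: &str = "/run/telemetry/upload.sock";

// Always "succeeds" from the caller's perspective; if the service is
// not installed or not running, the connect fails and the payload is
// silently dropped - the moral equivalent of writing to /dev/null.
fn send_telemetry(payload: &str) {
    if let Ok(mut stream) = UnixStream::connect(SERVICE_SOCKET) {
        // Anonymisation and daily batching would happen service-side.
        let _ = stream.write_all(payload.as_bytes());
    }
}

fn main() {
    send_telemetry(r#"{"event":"oom_recovered","count":1}"#);
    println!("telemetry call returned regardless of service state");
}
```

The point of the design is that applications never need to know whether telemetry is enabled; the single system-wide switch lives in the service, not in each app.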
It could almost be built on syslog, now that I think about it more, but that would be terrible.
The thing I like about popcon is it's opt-in, and disabled by default.
I enable it on my personal production systems and disable on everything else, both on privacy grounds (work), and not providing wrong data (disposable VMs).
I do not think a centralized "system wide telemetry" service is a good idea. This has a huge potential for abuse and can be extended to collect other things.
Privacy should be privacy by default, and if you want users to send you crash/usage logs then you need to show them all of the dirty details, let them review it, and choose whether or not to send.
Data also needs to not be shipped to a third-party (e.g. Google) to be correlated with other activity outside the app sending the telemetry. There's likely lots of data going to Google Analytics that the software/service owner never looks at, but Google uses for their own purposes.
Couldn't crash reports be separated from other telemetry data, possibly with a dialog letting the user choose whether to send a crash report or not? IIRC, the dialog actually existed in older Firefox versions. I find the amount of data they collect[1] to be borderline creepy.
The crash reports at https://crash-stats.mozilla.org are a separate opt-in bit of telemetry, via a dialog that is shown when Firefox crashes. You can opt into automatically sending them by setting browser.crashReports.unsubmittedCheck.autoSubmit2 to true; it can also become true if you opted in through the dialog about submitting unsubmitted crash reports.
Usage times, usage intensity, list of all extensions, country of origin. I don't understand why they'd need those to improve Firefox.
Next thing you know they might try to increase engagement time like they're some sort of social network. "Unlock the new exclusive colorway by logging in 30 days in a row." seems like something that could be implemented, seeing how they're time limited already.
(I work for Mozilla, though far from where decisions about telemetry would be made.)
Usage times and intensity are of high value when trying to improve market share. People who barely use the browser are at high risk of stopping use altogether. (For example, they might use multiple browsers, but most of their activity occurs on another and if they figure out multiple profiles or something, they'll leave altogether.) You can't do an A/B test to see what improves usage intensity if you don't measure usage intensity. Also, it's far from PII. And making it opt-in would make the stats useless; people who explicitly choose to allow telemetry are going to have vastly different usage patterns than the bulk of people who do not so choose.
Extensions are very important for crash reports, though far less than they used to be; many crashes could only happen when an extension did something specific. Extensions are now sandboxed enough that this isn't nearly as common, but if a crash signature has a high correlation with a particular extension, it can easily turn a non-actionable bug into something actionable.
Extensions for general telemetry are iffier. The info is fairly high value for things like understanding how people are using the browser and what features are popular or missing. But rare extensions also provide a lot of fingerprinting info. It's important to keep those metrics away from PII, and recorded independently so they can't be correlated.
Country of origin is pretty clearly useful. Mozilla has to allocate resources across countries, including marketing resources, but I would think it's really product management where it matters most. Users gain a lot of benefit from the browser adapting to different markets. (Screenshots have a wildly different importance in countries with Asian writing systems; Europe and especially Germany take privacy much more seriously.)
> Next thing you know they might try to increase engagement time like they're some sort of social network. "Unlock the new exclusive colorway by logging in 30 days in a row." seems like something that could be implemented, seeing how they're time limited already.
Heh. I do not want to predict what our marketing people will or won't do. I have mixed feelings about quite a few things. I'm not happy about ads appearing anywhere in the interface. But I'm also not happy about being dependent on Google ad money.
> And making it opt-in would make the stats useless; people who explicitly choose to allow telemetry are going to have vastly different usage patterns than the bulk of people who do not so choose.
First, it's a small percentage. Or believed to be a small percentage. There are some larger buckets (eg distributions that disable telemetry), but it's possible to do very rough estimates of the sizes of those through other means. And anyway, the vast majority of users are on Windows.
Second, it's fair: if you disable telemetry, you're choosing to not be considered in any telemetry-backed decision making. If you want to still be considered, then it's up to you to make your opinions heard in some other way. (Filing bugs or https://connect.mozilla.org or discussions in places like here, though note that the latter is mostly useless. Not many Mozilla people read this forum or take what is said here very seriously. And even if they do people will be vigorously arguing both sides so it's easy to pick the side you already agree with.)
There's nothing wrong with disabling telemetry. I respect the decision, and I'd certainly rather have people using Firefox with telemetry disabled than have those people not use Firefox. But it's your browser, and even the social contract by which you're using it doesn't say you owe us telemetry data.
I get the saying. But in this case, Mozilla gets most of its money from search royalties, primarily from Google. We are Google's product, not Firefox's.
It's both. Firefox sells you to Google; Google sells you to advertisers.
It seems Firefox executives take a large chunk of money from Google, presumably to make sure Firefox doesn't do anything too wild that would reduce Google's income.
People use it purely for that reason. If they don't, their usage will drop, which will reduce their revenue. The fact that this revenue comes from Google is largely a byproduct.
It's not a perfect system, but it should somewhat work.
I'm not sure - I guess the same motivation that supports the open source movement. I am satisfied that the actions Firefox has taken so far does tend to support my interests.
But many of us who used it from the start think it was a much better browser before. For a long time, so much was sacrificed for next to no improvement.
For me it was more or less rock solid at >800 tabs and with a lot less memory and more exciting extensions than I have now.
I admit this wasn't everyone's experience, but as a superuser tool it has degraded a lot over the years.
That said it is still the best browser for me: I don't think anyone else except Orion (which is Mac only) has actual tree style tabs (not to be confused with vertical, non indented tabs as seen in Opera derivatives).
> presumably to make sure Firefox doesn't do anything too wild
I wonder how much of what the Tor Browser version of Firefox does, would be upstreamed to Firefox proper, if not for that deal?
(I wonder how much the Firefox team considers Tor Browser to be "the real Firefox" / "Firefox the way we intended", with Firefox itself just being "the sell-out version of Firefox"?)
Purely technical telemetry like this is indeed useful. The problem comes when telemetry is used to justify deleting useful features such as compact mode.
The "make it hard to find" to "nobody uses it" to "let's delete it" pipeline is very real. Reminds me of the "defund it" -> "it does not work" -> "let's privatize it" pipeline in right-wing governments.
My personal objective in most situations is to discourage other people from enabling telemetry while enabling it myself.
As a larger piece of the visible audience, I then hope that more attention is given to me. This is especially important for open source projects. And I don't care that much about what the company is getting from me.
But listen, they collect all sorts of stuff and you should disable it unless you understand it. Ideally, privacy laws expand to the point where you need to email a signature saying you understand before you opt in to telemetry. Informed consent is required for any reasonable study.
Web browsers are definitely way measurably better and more stable than they've been basically since the beginning. We went from buffer overflows in the URL parser, dog slow JS interpreters, and unclear specs that are implemented completely differently in every engine, to today, where browsers are some of the finest engineering in the whole field, featuring state of the art JITs, extremely precise specifications, and bug bounties blowing past six figures once you get to code execution.
Of course you can do something ridiculous and compare today's Firefox with NCSA Mosaic or something. And yeah, old-school HTML would still be perfectly usable if people wanted to do that, so it's not like this is entirely pointless. But if you even make the comparison slightly more reasonable, like comparing Chrome and Firefox today to their versions from 10 years ago, I mean it's really no contest. From the standpoint of the core functionality, these are better browsers that are faster and generally more efficient. It's not just because computers got faster; that enabled more complicated things of course, but I have several older machines still new enough to run some modern software (Pentium 2-4) and generally, the newer browsers just run significantly better when doing equivalent things. They handle load better, they generally load pages better, scrolling generally feels better... Even on crappy single core processors. Even crappy sites like YouTube can work pretty decently on a Pentium 4.
This is not really that surprising because it's not like they haven't been optimized for this. Even if phones are starting to get very fast, there's still plenty of lower end phones on the market where it's important to be able to deal with slow processors with few cores.
Chromium is often my go-to codebase when I want to see how to do something correctly, especially with weirder OS APIs. Perhaps somewhat fittingly, I used to often use Qt for this, in a different world.
> Web browsers are definitely way measurably better
Just recently, Firefox did not let me view a website - even though the server was up - because the certificate was expired and the site had previously used HSTS. No override was provided to me as a user. Better? No.
So an older browser that doesn't support HSTS is good, because it means you can browse a page even when your browser has valid reason to believe a downgrade attack could be occurring? The nuances of secure connections can be pretty awful, but a vast majority of the time, HSTS is good. The fact that HSTS is ubiquitous makes it less likely that attackers will even try attacks like these.
If a website admin messes up TLS when using HSTS, that's unfortunate. But: they opted to use it on purpose. It's hardly the browser's fault for trusting the website that it's not OK to browse in this circumstance.
Or they didn't - e.g. the entire .dev TLD is HSTS-preloaded, or it could have been a previous domain owner, or they were blindly following a guide.
But even if they mean to have HSTS, mistakes do happen and the browser should not prevent me (the user) from working around them.
Security without any other considerations is not reasonable - and if you want that you might as well prevent all connections to avoid all 0days.
Ultimately the browser is not in a position to fully judge the threat model and thus should allow the user to override its guess - always. For example, when I want to look at a blog of funny pictures without any login info, I don't care if someone tries to MITM that connection. And unless you are in a country with shitty consumer protections, an MITM is already so unlikely as to be a conspiracy-theory level concern.
> So an older browser that doesn't support HSTS is good, because it means you can browse a page even when your browser has valid reason to believe a downgrade attack could be occurring?
Well I mean false positives are bad because they are false. That much doesn't require further justification or someone to embrace false negatives instead or whatever. This policy of "treat everyone as stupid and gullible and ditch them if they won't upgrade" makes sense for giant tech companies, but not necessarily for everyone. Some of us have to be able to work with old technology.
It's not a false positive because it's broken, though. It's a false positive because it's working as intended and the host is simply violating the rules. It's weird that a site would opt-in to a feature like this, use it incorrectly, and then when the browser correctly rejects it, you would get mad at the browser. Nobody was actually forced to use HSTS here, and there's also no good reason for a TLS certificate to be expired either; in production, this is an incident no matter what.
The browser really isn't treating you as stupid, it's telling you "this is a serious security issue, if you really want to bypass this, you're on your own." You absolutely can, using flags in chromium or config in Firefox, or sometimes by clearing the HSTS cache in either. The benefit of this is that it ensures users who don't know better, the majority, don't stumble into an attack in the most critical situations, and it as well makes it significantly harder for developers and malicious attackers alike to try to convince end users to wrongly bypass security features, a problem that plagued early web browsers which had much worse UX around TLS. Even though it can be annoying, it's helpful to all of us, because the security posture of those around you naturally impact your own security posture, too.
This is all especially reasonable because HSTS is opt-in from the host's perspective. You're supposed to use it when you'd absolutely rather have false positives than not catch an attack.
This particular point doesn't have much to do with old technology, but I honestly don't think most developers set out to just break old tech. I agree that it is a shame the degree of churn we go through, but even if you have a super valid reason to absolutely need to use old technology, it's still not a good argument for the rest of the world to hold off on improving security, privacy and performance by holding back TLS upgrades or continuing to include and debug polyfills for all of eternity. If you really absolutely can't make TLS work for you, nothing is stopping you from running an SSL stripping proxy in the middle. Works pretty well for me.
Hopefully in the future the churn of technology will slow down and computers will last longer, but we're literally still near the beginning of the computing revolution, and the computers from 20 years ago are probably a much more enormous delta from today than the computers 20 years from today will be. (And even if a breakthrough proves this untrue, it still seems unlikely that today's boxes will become useless, with how much compute they pack.) And yet despite that, Linux is still dutifully supporting processors as old as 486, even though it's not really that important to be running the latest kernel on a machine that old. That's pretty good, and even if browser updates are difficult on machines that old, I have little doubt that some people will be maintaining them all the way to the 2038 problem where it will get much harder.
Chrome will do the same thing and so will any other browser that honors the HSTS standard in RFC 6797. You probably want to direct your ire at the owner of the website, setting the HSTS header is a positive affirmation by the server that it does not accept insecure connections.
Special attention should be paid to section 12.1 of that RFC:
> If a web application issues an HSTS Policy, then it is implicitly opting into the "no user recourse" approach, whereby all certificate errors or warnings cause a connection termination, with no chance to "fool" users into making the wrong decision and compromising themselves.
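That rule reduces to a small decision function. This is a toy model only; names are invented, and real implementations are far more involved (preload lists, max-age expiry, per-host caches, and so on).

```rust
#[derive(Debug, PartialEq)]
enum Action {
    Proceed,          // certificate is valid
    WarnWithOverride, // invalid cert, no HSTS: click-through allowed
    HardFail,         // invalid cert + HSTS: terminate, no recourse
}

fn decide(cert_valid: bool, host_has_hsts: bool) -> Action {
    match (cert_valid, host_has_hsts) {
        (true, _) => Action::Proceed,
        (false, false) => Action::WarnWithOverride,
        // RFC 6797 s.12.1: the site opted into "no user recourse".
        (false, true) => Action::HardFail,
    }
}

fn main() {
    assert_eq!(decide(false, true), Action::HardFail);
    println!("expired cert on an HSTS host: connection terminated");
}
```

The asymmetry is deliberate: without HSTS the browser guesses the user's intent and offers an override; with HSTS the site itself has declared that no override should exist.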
This reminds me of Microsoft having a strong incentive to make sure hardware manufacturers write their drivers correctly, because if a bad driver causes problems the average user will blame Microsoft, as they don't know that MS don't write the drivers.
In this situation Firefox are being blamed for correctly implementing the standard and preventing access to the site, when the blame should fall on the site owner for not setting the security up correctly.
The html spec also defines that audio can be set to autoplay and the JS spec allows websites to freely open popups. Yet browsers are able to ignore that in the interest of the user - because they are (supposed to be) user agents not website agents that have to blindly follow what the website says. Browsers are free to not implement user-hostile specs or only implement parts of them and they commonly chose to do so.
Ultimately, protocol specs have no purview over user interaction and can at best recommend the expected behavior. There is nothing wrong with the browser telling the user that this website is expected to be accessed trough TLS with a valid certificate and isn't and to redirect HTTP requests to HTTPS but the spec is no excuse for the browser not letting the user overwrite that policy.
So no, the ire should be directed at the browser, because the browser is there to make sure my interests as the user are served, not those of some website owner (or someone making decisions for them, like Google for *.dev).
Neither the HTML spec nor the JavaScript spec requires that audio be autoplayed or that pop-ups be opened without restriction. HSTS is different in this respect.
I'm sure if you want to hack your browser so that it ignores that header you can, but the idea is that any server sending that header is telling you to go away if the certificate is invalid.
HSTS can be annoying when such things happen but from what I understand the browser is acting properly there. Also I've been able to clear that in Firefox in the past, I think I had to clear all data for the site. I do think that an easier mechanism could be a good thing.
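For concreteness, the HSTS policy being discussed is just a response header whose directives (per RFC 6797) tell the browser to refuse insecure connections. A minimal sketch of parsing one, where the example header value is an assumption for illustration:

```python
# Minimal sketch: parsing a Strict-Transport-Security header value into its
# RFC 6797 directives. The example value below is hypothetical.

def parse_hsts(value):
    policy = {"max_age": None, "include_subdomains": False, "preload": False}
    for directive in value.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            # Seconds the browser must remember "HTTPS only" for this host.
            policy["max_age"] = int(directive.split("=", 1)[1])
        elif directive == "includesubdomains":
            policy["include_subdomains"] = True
        elif directive == "preload":
            policy["preload"] = True
    return policy

print(parse_hsts("max-age=31536000; includeSubDomains"))
# → {'max_age': 31536000, 'include_subdomains': True, 'preload': False}
```

Once a browser has recorded a non-zero max-age for a host, it upgrades plain HTTP requests to HTTPS and (per the "no user recourse" section quoted above) hard-fails on certificate errors until that time expires.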
By selling that software to users once, then only improving it a year or two later based on whatever quirks got enough people mad enough to actually send an email complaining about it. If it's too critical, there goes a quarter of your user base.
> If it's too critical, wouldn't they send an email about it?
Usually no. I very rarely email software providers to complain about their software. I'll usually just uninstall.
Some software has more of a relationship with users than others. Doing support for B2B SaaS, I knew many of the users by name, and they knew me. Even then, though, some people would just live with terrible bugs for a long time.
We once had an issue where the initial page post login would take 30+ seconds to load because it synchronously calculated a bunch of their stats. I don't recall anyone writing in to complain, and when I fixed it, I don't think they said anything. (The slowdown was a boiled frog situation, since it got slower as they grew.)
Because it was a website, we could see the load time, and I saw it and fixed it, but if it were desktop software, the only way we'd have known would have been to load up a test account with hundreds of thousands of fake users and orders and fake shipments, and gone through the process of using the fake data in a real way.
Load testing is obviously good, but it will never beat being able to see how it performs for real users, and just hoping those real users will report back to you is completely unrealistic.
Don't get me wrong: I handled thousands of bug reports and feature requests during my time in support, and we got a lot of useful feedback. But insight into the actual performance is critical. Major bugs will go unreported for a long time. People just expect software to be buggy these days.
People have a high threshold for sending emails, and most users will never send them. So if a change only makes things a bit slower / crashier or makes things much worse but mostly just for users who don't complain, you don't know. In this case "Firefox crashes sometimes" remains true even after their fix reduced the number of crashes by 70%.
You wouldn’t know if a product had crashes like this unless you experienced it yourself, or if users sent you messages about it.
Even so, you’d have little way of knowing whether the report you’re looking at is widespread or just a tiny anomaly. So you could easily dismiss a report that affected many people, or spend a tonne of time digging into a report that barely ever occurs.
Using telemetry data to enhance software is an incredibly useful and powerful tool. In direct response to your sarcasm: it is superior to how it “used to be”.
Well, they kind of didn't. There was significantly less software in the olden days, and you usually had to call someone personally; if you were lucky, they fixed your bug half a year down the line, once enough fixes had accrued to deliver a new version.
Yes, software is significantly better, in the sense that catching bugs through telemetry or other quick feedback and shipping an update within 24 hours, before most people notice, is now the norm. I don't know why people romanticize the past.
Telemetry just means any data about operations sent back to the mothership. Crash logs are just as much telemetry as a click event log. Equating all telemetry with spying is a knee-jerk reaction to tech companies abusing telemetry.
> Telemetry is the in situ collection of measurements or other data at remote points and their automatic transmission to receiving equipment (telecommunication) for monitoring
It feels like your response fails to address the point colejohnson66 is making. They are saying "some spying is done via telemetry, but not all telemetry is spyware." The automatic nature of it is orthogonal to its spy-ness or user hostility.
Basically, "automatic" here is an antonym to "manual" e.g. user emailing a bug report.
Personally, I consider the following sorts of telemetry "not spyware":
- coarse grained crash report (build version, arch, etc). this is usually a manual prompt on crash, so "semi-automatic telemetry" is how I'd define it
- anonymized metrics/spans. Basically "foo_bar() took 20ms". These are "automatic" in that the collection and transmission happen without user input, but that's orthogonal to whether the user opted in/out.
Fine-grained usage information is a lot more spyware-esque.
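The "foo_bar() took 20ms" kind of span mentioned above can be collected without touching any user data. A hypothetical sketch (the `traced` decorator and `spans` queue are inventions for illustration, not any real telemetry library):

```python
# Hypothetical sketch of an "anonymized span": record only the function name
# and duration, never arguments, return values, or anything user-identifying.
import functools
import time

spans = []  # stand-in for a telemetry submission queue

def traced(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            ms = (time.perf_counter() - start) * 1000
            spans.append({"name": fn.__name__, "duration_ms": round(ms, 2)})
    return wrapper

@traced
def foo_bar():
    time.sleep(0.02)  # pretend to do 20ms of work

foo_bar()
print(spans[0]["name"])  # → foo_bar
```

Whether such spans are sent automatically or only after an opt-in prompt is a separate question from what they contain, which is the distinction being drawn above.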
> This is also a good example of the benefit of telemetry:
The benefit can be claimed only if the user consented to their private information being shared with the browser vendor in the first place. With most browser telemetry that is not the case, and the browser is simply not respecting users' privacy. The right to privacy, as a human right, trumps the 'right' to have the product 'improved'.
Otherwise we can find "benefit" in everything. One of the benefits of hell, for example, is that it is never cold.
If Firefox was selling a physical product in a retail store, they would be able to watch you walk around the store on CCTV, see you avoided an aisle because there is a polar bear lurking, and then remove the polar bear.
But since the product is digital they just have to give it away blind? Never knowing if people even use the features or not?
>they would be able to watch you walk around the store on CCTV
That seems like an unfitting comparison.
The problem doesn't arise in the store, but when using the product at home.
The equivalent of store CCTV in this comparison would rather be a server log on the Mozilla website (where people get the product). It's fine to do telemetry there without my consent (as long as it's only used by the first party), if you ask me.
But after I leave their premises it's none of their business how I use the product.
Sounds like you want it to be ok that your newly bought pack of condoms sends out a message to the factory once you open one.
Software is often described as "tools" and so an analogy to a drill or a magnifying glass is as apt as any. In fact a car is a good analogy to a browser because we use browsers as sort of a "second home" in our computer and it allows us to "visit websites" and a browser is a whole ecosystem unto itself.
So if software is a tool and my drill is monitoring the holes I make in stuff and its efficacy in doing that, that seems fine, but if the drill is sitting in my toolbox being a busybody and sending back everything it can find about me from within the toolbox, that drill is made by assholes, don't you agree?
It does, but I think that's only for whether to send a detailed crash report (which could contain private data). I think (but haven't checked) that the "number of crashes" telemetry includes cases where you don't choose to allow it to send the full report.
Any telemetry will at least transmit the user's IP address (by the nature of how requests are made), which is legally considered private information.
Even if that were not the case, a privacy-respecting (any?) browser has no business whatsoever sending any information from my machine, including "I crashed because of OOM" or "I clicked button X", unless I consent to it doing so.
Therefore zero telemetry by default is the only acceptable standard for any browser that claims to be privacy respecting.