Even the linked blog post indicates that this is not the case. Windows has Copilot buttons on practically every built-in application, a taskbar icon, and a dedicated physical keyboard key that people commonly hit by accident (contractually required for OEMs to provide). They also actively promote Copilot in the OS (particularly Home Edition with nothing disabled, e.g. "Tips," notification spam, recommendations, etc.).
Nobody can predict what Apple will do tomorrow, but as of today they aren't pushing Siri/Apple Intelligence particularly hard, especially after initial setup. Practically none of the above applies to them, for example.
I have Pro Edition and for me Copilot only added two icons: one in Notepad and another in Paint. I ignore both. There's also the Copilot app that I didn't even know I had installed.
I don't know what happens with Home Edition, but I thought the pushback was mainly from Insider Preview?
You can also get rid of both of them very easily with O&O ShutUp10++ (or any of the many other GUIs or scripts designed for the same purpose of decrapifying Windows). I toggled off Copilot and OneDrive and haven't seen either in all the years I've been using Windows 11.
You want to take a look at Microsoft Office, my bad, Microsoft Copilot 365...
You can't even select a cell in Excel without a freaking Copilot button popping up every single time. Same in Word; it's maddening!
You could argue that Windows isn't Microsoft Copilot 365, but then why do people even use Windows? It's always because of the Office, my bad, Copilot 365 suite.
I just don't think people like having something shoved down their throats. The dedicated Copilot button on keyboards and the Copilot shortcuts all over the OS (plus the automatic popups/ads) went far too far.
I think OS level integrations that are opt-in, not opt-out, may even be popular. But they have to be done carefully and tastefully.
I have the same feeling about any kind of integration. We're moving away from Google because we simply do not want to have this kind of forced relationship with products and/or services. It either fits and we'll pick it or it does not and then we don't. We won't pay for things we do not intend to use. And we don't want exposure to products that may constitute a security or a privacy risk.
Email and document collaboration are the big ones. Email is probably going to be the easier of the two; documents are much harder because we have a pretty specific workflow that is tied closely to how Google Docs works. But the decision has been made, and I don't care if it causes us to work a bit slower or differently; this is just unacceptable.
The whole Gemini thing is just a massive embarrassment for Google. I really can't follow their thinking; you'd think that after the Google+ debacle they would have learned their lesson not to cannibalize your old products to launch a new one.
I actually think Gemini Pro is great and I don't have a problem paying for it, but I don't want its tendrils in Drive and Gmail or anywhere else, it actively damages the product experience there. Everywhere they've tried to integrate LLMs, it generally provides an experience that's inferior to just chatting with Gemini.
The closest to useful it's been is in the GCP console, but it seems to decide at random to forget context, and it might just be Gemini Flash with minimal thinking, which tends to mean it's just repeating things it's already said.
Fixing long-standing complaints, removing Copilot from obnoxious places, improvements to Windows Update and Windows Explorer stability/microstutter/lag, etc.
I congratulate them on seeing sense, and I congratulate Apple on another victory with the Neo. Kind of frustrating that's what it took for Microsoft to finally listen to their userbase.
I agree, and sadly I wouldn't hold out hope for actual meaningful changes (granted, the last time I had Windows was Win 7).
My reasoning comes from bitter experience. I've seen too many of these honest talks/commitments; it's always the same pattern when a product/company starts to decline. Suddenly somebody with a technical background shows up, talks about past mistakes and what needs to be fixed, and sometimes even holds a discussion, which is usually very reasonable. But as time goes on there are only cosmetic changes, with excuses like lack of resources, the market wind changed this time, the changes are too hard to make due to politics, etc.
The only thing I'd add is that not only did Pavan tweet the infamous tweet that caused the backlash, he also ridiculed those in the backlash (since deleted). Also, Satya still spews the same "agentic OS" narrative, as recently as last week.
So, I hope for the best, but I don't plan on taking them at their word.
Everyone at MSFT who is senior is a lying piece of shit these days. I remember on here Satya being treated like the second coming of Jesus due to his promises. Any comments against him were downvoted.
Absolutely nothing wrong with an "agentic OS", agentic UX is the future of personal computing. The ideal is that something intelligent understands what you want to do and gets it done.
Unless you really think we've reached the pinnacle of user interface with repetitive clicking around and menus.
The problem is with shoving AI down users' throats. Make it an option, not the only option.
> The ideal is that something intelligent understands what you want to do and gets it done.
Maybe? For a couple of decades, we believed that computers you can talk to are the future of computing. Every sci-fi show worth a dime perpetuated that trope. And yet, even though the technology is here, we still usually prefer to read and type.
We might find out the same with some of the everyday uses of agentic tech: it may be less work to do something than to express your desires to an agent perfectly well. For example, agentic shopping is a use case some companies are focusing on, but I can't imagine it being easier to describe my sock taste preferences to an agent than click around for 5 minutes and find the stripe pattern I like.
And that's if we ignore that agents today are basically chaos monkeys that sometimes do what you want, sometimes rm -rf /, and sometimes spend all your money on a cryptocurrency scam. So for the foreseeable future, I most certainly don't want my OS to be "agentic". I want it to be deterministic until you figure out the chaos monkey stuff.
I think your last paragraph is the real issue that will forever crush improvements over clicking on stuff. Once you get to "buy me socks" you're just entering some different advertising domain. We already see it with very simple things like getting Siri to play a song. Two songs with the same name, the more popular one will win, apply that simple logic to everything and put a pay to play model in it and there's your "agentic" OS of the future.
> it may be less work to do something than to express your desires to an agent perfectly well
As I use AI more and more to write code I find myself just implementing something myself more and more for this reason. By the time I have actually explained what I want in precise detail it's often faster to have just made the change myself.
Without enough detail SOTA models can often still get something working, but it's usually not the desired approach and causes problems later.
It all depends on where the AI is running. The problem with the idea is that the majority of Windows boxes where it would be running do not have the bare-metal hardware to support local models, so it would run in the cloud, with all of the privacy/security issues associated with that. It would be neat, given MSFT's footprint, to develop small models running locally, with user transparency when it comes to actions, but that doesn't align with MSFT's core objectives.
AFAIK the existing Copilot features always use the NPU and do not fall back to the cloud. Given that Windows 12 will require an NPU I don't see why it would fall back either.
This is true only for Copilot+ features. The issue MSFT faces, especially as it pushes Copilot EVERYWHERE, is that the majority of the hardware running Windows does not, and will not, have the NPU required for 12, nor is there the consumer purchasing power to upgrade to hardware that has one. This is a reality that MSFT just does not seem to want to deal with as they push the technology onto consumers, because the push is based not on the reality of the install base they are dealing with but on trying to justify their strategic investment in AI in the B2C space without doing the product-market-fit work to justify it.
- "summarize the discussions on hacker news of last week based on what I would find interesting".
- "Plan my summer vacation with my family, suggest different options"
- "Look at my household budget and find ways to be more frugal."
There are thousands of things I can think of where an agentic OS would work better than the current screen-and-keyboard paradigm. I mean, all of these things I could do now with Claude or Codex, and some of them I already do with these tools.
"Agentic typewriters are the future of typewriting. The idea is that something intelligent understands what you want to type and types it for you. Unless you really think we've reached the pinnacle of typewriter interfaces with repetitive key taps and carriage returns."
See how that sounds a bit silly? It's because it presents a false dichotomy. That our choice is between either the current state of interfaces or an agentic system which strips away your autonomy and does it for you.
Even theoretical AI still has the other-minds problem from economics.
Communicating and predicting desires, preferences, thoughts, feelings from one mind to another is difficult.
Fundamentally the easiest way of getting what you want is to be able to do it yourself.
Introduce an agent, and now you get the same utility issues as trying to guess what gifts to buy someone for their birthday. Sure, every now and then you get the marketer's "surprise and delight," but the main experience is relatively middling, often frustrating and confusing; and if you have any skill or knowledge in the area, or the ability to do it yourself, it's ultimately frustrating.
We've already been through this when people a decade ago thought voice was the future of the computer.
When that completely didn't work, we thought that augmented reality was the future of the computer, which also didn't work out.
You need a screen to be able to verify what you're doing (try shopping on Amazon without a screen), which means you also need a UI around it, which then means voice (and by extension agents which also function by conversation) is slower and dumber than the UI, every time.
Meanwhile I have yet to see any brand excited to be integrated with ChatGPT and Claude. Unlike a consumer, a purely "reasoning-based" agent is most likely to ignore everything aesthetic and pick the bottom-of-the-barrel cheapest option in any category. How do you convince an AI to show your specific product to a customer? You don't.
We’ve had computing technology that clearly understands what the user wants to do. It’s called a command line interface. No guessing, no recommendations, no dark patterns, no bullshit.
I see nothing about privacy, spying, forced Microsoft accounts, or the continued locking down of Windows that they've been doing.
I see that they're bringing back _some_ of the taskbar options you had in Windows 10 (they termed it "introducing"), and they promise to make Explorer faster. Great. But they also say they're bringing more AI into Windows, and something about widgets that I don't think anyone cares about.
And lastly, they're promising to revamp the place where you go to rant at Microsoft, but they're not promising to actually listen to the feedback.
Yep, I wanted to write the same thing. I remember two occasions when even Windows 10 Pro attempted to force me to log in to an MS account when booting my laptop. Somehow I skipped it (I don't remember exactly what I did, since obviously they made it unintuitive). Much later I eventually logged into the account just to buy Minecraft, but boy, this sudden forcing of MS accounts during boot was so disgusting. I guess this is becoming much worse now. After learning that MS is making it harder and harder to set up a local account (eventually it will become impossible?) and that even Notepad can ask for an MS account, I deleted Windows and installed Ubuntu.
Online accounts are fine when optional, but unacceptable when forced.
This is just cheap damage control; wait and see if they actually do all of those things correctly. The slow File Explorer has been an issue since the very beginning of Windows 11, and they "fix" it only now? But they took the time to add Copilot to the Snipping Tool?
I find that this happens when you enter folders that contain media files (audio, video, and so on). One way to fix it: enter one such folder, keep only the standard columns (file name, date modified, those columns) and remove all the media metadata columns (track length, artist, contributing artist, or whatever else), then click the three-dots icon (…) in the File Explorer menu, select the View tab, and click 'Apply to folders'. This will apply the column and view settings you just chose to all such folders.
Now all folders with media files open immediately. Also, if you don't want to wait on folders of video files, right-click in the folder and select View -> Details or View -> List or some other option that doesn't create thumbnails, and it'll load even quicker.
> remove all the media metadata columns [...] click the three-dots icon (…) in the File Explorer menu, select the View tab, and click 'Apply to folders' [...] right-click in the folder and select View -> Details or View -> List or some other option
I'm sorry, this is very funny to me in the context of the person upthread arguing about how great "agentic OSes" are. Some people seem to believe that we're living in the future, but I'm pretty sure we're still stuck in Windows '95.
It's not just media files. I'm forced to use Windows 11 on my work PC, and I had to disable the new shell extensions to make the file explorer usable again. It's noticeably faster without the new UI.
Looking up media details is of course one of the main reasons. Thank you for sharing this information. However, all the folders are already configured as general folders and this one specifically has a bunch of PDF files.
When such basic tasks are failing spectacularly, nobody can have any confidence that complex things can be achieved reliably. Instead of spying on their users and trying to squeeze more and more money out of them, they should first focus on making a great product and on making it better, not on researching ways to enshittify things.
Nah, analytics. Some PM needs to know which operators are most used so they can optimize the calculator layout to improve the UX. And for the least-used operators, they'll take a pragmatic stance and remove them to clean up the interface.
This sounds wildly optimistic. I buy the metrics compilation, but I'll be damned if there's any PM at Microsoft (or Apple or Google for that matter) who's interested in '[optimizing] the calculator layout to improve the UX.'
I need that Drake meme here, where he's negative about the idea "Optimize the calculator layout to improve the UX" and very enthusiastic about the idea "Find ways to get incremental revenue from users of Calculator with ads or selling of data"
Obliterating the performance of a calculator wasn't enough; they actually managed to introduce some all-new usability regressions as well. They decided to localize inputs, so now periods don't work depending on your locale. Copying numbers also grabs the formatted, localized output instead of the raw value. Parsers are going to love those commas.
We're reaching Microslop levels we never thought possible. I actually think Claude Code would have done a better job.
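For anyone who hasn't hit this, here's the failure mode in miniature, sketched in Python (the de_DE locale is just an example and has to be installed for setlocale to succeed):

    import locale

    # Render a number the way a German-localized calculator displays it.
    locale.setlocale(locale.LC_NUMERIC, "de_DE.UTF-8")  # assumes this locale exists
    shown = locale.format_string("%.2f", 1234.56, grouping=True)
    print(shown)  # 1.234,56

    # Paste that into anything expecting machine-readable numbers:
    try:
        float(shown)
    except ValueError as e:
        print(e)  # could not convert string to float: '1.234,56'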
Be grateful we are talking about a desktop OS, where you are free to not use the built-in apps and can even install an arbitrary calculator of your choosing. Unlike the dozens of built-in "Apps" on mobile platforms that just exist, where you either use them all as provided or switch platforms entirely (Apple or Android).
Right now my Start menu randomly crashes: all I see is a black box with no icons. I'm impressed by how often even basic functionality breaks.
Well honestly, that's the easiest problem to fix: just install any of the dozens of excellent and stable third party file managers. I for instance am (or was, while I still used Windows) a fan of Total Commander (actually, when I started using it, it was called Windows Commander). As a bonus, you'll be spared the useless UI and usability changes inflicted upon you with every new Windows version.
If you're going to replace tools as fundamental as the file manager, you may as well switch to a stable and fast operating system like most Linux distributions or Mac.
Yeah, that's what I did, eventually, but some people still need some software that only runs under Windows, or want to play games without messing around with Proton etc. etc.
No. "Commitment" in corporate speak is a synonym for "absolute lack of intention". That's why corps 'commit' to reducing emissions, treating employees fairly, etc, ie. to all the things they will not do. But no suit 'commits' to making money. They just make money. It's just a superficial linguistic gesture. Shakespeare got it.
Saying and doing are very different. They have passed through the "fuck around" phase, and are entering the "find out" phase of this AI journey. Lots of companies are, suddenly.
My employer trained us all on the Gartner hype cycle and tested us on how to remain level-headed before and during the Peak of Inflated Expectations. Now every single manager in the company is drooling over AI, saying "this is the future, join us or find another job," and I cannot wait for the curve to come back down to a sane level where intelligence rules behavior as much as it used to. We'll see.
We’ve certainly done the “fucking around” and now we'll see if we "find out" enough to regain our sanity and our humanity.
Can't help but feel like Microsoft is getting pressured by the laptop OEMs to make Windows not suck, because the MacBook Neo is going to eat all their lunches.
- The OS is getting buggier, with every large update getting press coverage on scary bugs.
- The OS is getting overbearing; constant nagging for upsells on Microsoft products with terrible attach rates.
- The OS is getting focused on hype; the latest trend is AI? Let's force a new button on people that will break decades-old workflows. Let's put AI everywhere.
- The OS is getting slow; there's no focus on speed and the place where 85% of the market resides (laptops) is getting completely trounced by Apple Silicon.
- The OS is getting squeezed; under $300 it's all terrible e-waste-in-six-months Chromebooks. Over $500, Apple is aggressively entering the market with the Neo. Over $1,000, Apple has owned a commanding share of the profits for decades.
There are 994 more problems with Windows, but I've made my point: there's just no end in sight to the problems with Windows. And I haven't even mentioned that it's become a minor part of Microsoft's profits!
They are not saying "we will remove the mandate to use a Microsoft Account." By itself, that shows their "care" is purely corporate, likely driven to calm down furious OEMs who will happily remind them Apple doesn't need an Apple Account to use a now-cheap Mac.
Also, because Nadella can't stand the word, I'll say it right here: Microslop is still making Winslop to help people make Officeslop to then upload to Slopdrive.
Good point, and that one has actually caused logistical headaches. If someone tries to set up a new out-of-box computer without an internet connection, well, you just cannot. Even the previously working bypass has been removed in a recent update.
And, yes, I am aware that Pro/Enterprise don't suffer from this, but a LOT of computers sold are Windows Home/OEM licenses. It impacts a ton of people.
I think the point you're making is fully correct, so consider this a devil's advocate argument...
People claim you can use Claw agents more safely, while getting some of the benefits, by essentially proxying your services. For example, with Gmail, people are creating a new Google account, forwarding email to it via a rule, and granting access to their calendar via Google's Family Sharing. This lets the Claw agent read email and access the calendar, but even if you ask it to send an email it can only send as the proxy account, and it can only create calendar appointments and add you as an attendee, rather than destroying/altering appointments you've made.
Is the juice worth the squeeze after all that? That's where I struggle. I think insecure/dangerous Claw-agents could be useful but cannot be made safe (for the logical fallacy you pointed out), and secure Claw-agents are only barely useful. Which feels like the whole idea gets squished.
We already have this concept. It’s called user accounts.
Your Gmail account vs my Gmail account. Your macOS account vs my macOS account.
Yes, I can spam you from my Gmail. Yes, I can use sudo on my Mac and damage your account. But the impact is by default limited.
The answer is to just treat assistants as a different user profile, use the same sharing mechanisms already developed (calendar sharing, etc), and call it a day.
That's punting the problem in the same way SELinux did. Agent loops are useful precisely because they're zero config.
Problem: I want to accomplish work securely.
Solution: Put granular permission controls at every interface.
New problem: Defining each rule at all those boundaries.
There's a reason zero trust style approaches won out in general purpose systems: it turns out defining a perfect set of secure permissions for an undefined future task is impossible to do efficiently.
> I think insecure/dangerous Claw-agents could be useful but cannot be made safe
Isn't it a question of when they will be "safe enough"? Many people already have human personal assistants, who have access to many sensitive details of their personal lives. The risk-reward is deemed worth it for some, despite the non-zero chance that a person with that access will make mistakes or become malicious.
It seems very similar to the point when automated driving becomes safe enough to replace most human drivers. The risks of AI taking over are different than the risks of humans remaining in control, but at some point I think most will judge the AI risks to have a better tradeoff.
People seem to dismiss OSWorld as "OpenClaw," but I think they're missing how powerful and flexible that type of full interaction is for safe workflows.
We have a legacy Win32 application, and we want to side-by-side compare interactions and responses between it and the web-converted version of the same. Once you've taught the model that "X = Y" between the desktop and web versions, you've got yourself an automated test suite.
Is it possible to do this another way? Sure, but it isn't cost-effective as you scale the workload out to 30+ Win32 applications.
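The harness around the idea can be tiny; a rough Python sketch, where the Driver protocol and the translate mapping are hypothetical stand-ins (not a real OSWorld API):

    from typing import Callable, Protocol

    class Driver(Protocol):
        def run(self, action: str) -> str: ...

    def check_parity(action: str, win32: Driver, web: Driver,
                     translate: Callable[[str], str] = lambda a: a) -> None:
        """Replay one scripted interaction on both UIs and diff the responses.
        `translate` carries the model-learned "X = Y" mapping from desktop
        controls to their web equivalents."""
        legacy_out = win32.run(action)
        web_out = web.run(translate(action))
        if legacy_out.strip() != web_out.strip():
            raise AssertionError(f"{action!r}: {legacy_out!r} != {web_out!r}")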
I keep seeing people make the same mistake XML made, over and over, without learning from it. I will state the problem thusly:
> The more capabilities you add to an interchange format, the harder that format is to parse.
There is a reason why JSON is so popular: it supports so little that it is legitimately easy to import. Whereas XML supports attributes, namespaces, CDATA, DTDs, QNames, xml:base, xml:lang, XInclude, etc. etc. They gave it everything, including the kitchen sink.
There was a thread here the other day about using SQLite as an interchange format to REDUCE complexity. Look, I love SQLite as an application-specific data store. But much like XML, it has a ton of capabilities, which is good for a data store but awful for an interchange format with multiple producers/consumers with their own ideas.
CSV may be under-specified, but it remains popular largely due to its simplicity to produce/consume. Unfortunately, we're seeing people slowly ruin JSON by adding e.g. comments to the format, with others then using those "comments" to hold data (e.g. type information), which must be parsed. Which is a bad version of an XML attribute.
I think JSON has the opposite problem: it is too simple. The lack of comments in particular is a real pain for many common usages of the format today.
I know some implementations of JSON support comments and other things, but it is not true JSON, in the same way that most simple XML implementations are not true XML. That's why I say "opposite problem": XML is too complex, and most practical uses of XML rely on incomplete implementations, while many practical uses of JSON rely on extended implementations.
By the way, this is not a problem for what JSON was designed for: a text interchange format, with JS being the language of choice. But it has gone beyond its design: configuration files, data stores, etc.
A lot of people dislike the decision not to include comments in JSON, but I think, while shocking, it was and is totally correct.
In a programming language it's usually free to have comments because the comment is erased before the program runs; we usually render comments in grey text because they can't change the meaning of the program.
In a data language you have no such luxury. In a data language there's no comment erasure happening between the producer and the consumer, so comments are just dangerous as they would without doubt evolve into a system of annotations -- an additional layer of communication which would then not be standardized at all and which then would grow into a wild west of nonstandard features and compatibility workarounds.
I don't dislike the decision at all, FWIW! For data interchange it's totally reasonable. But it does make JSON ill-suited for a bunch of applications that JSON has been forcefully and unfortunately applied to.
> In a data language there's no comment erasure happening between the producer and the consumer, so comments are just dangerous as they would without doubt evolve into a system of annotations -- an additional layer of communication which would then not be standardized at all and which then would grow into a wild west of nonstandard features and compatibility workarounds.
But there's nothing stopping you from commenting your JSON now. There's no obligation to use every field. There can't be, because the transfer format is independent of the use to which the transferred data is put after transfer.
And an unused field is a comment.
{
  "customerUUID": "3",
  "comment": "it has to be called a 'UUID' for historical reasons"
}
If this would 'without doubt' evolve into a system of annotations, JSON would already have a system of annotations.
> so comments are just dangerous as they would without doubt evolve into a system of annotations -- an additional layer of communication which would then not be standardized at all and which then would grow into a wild west of nonstandard features and compatibility workarounds
IIRC Douglas Crockford explicitly stated that he saw people initially using comments for a purpose like ad hoc preprocessor directives.
Many years ago I worked for a company that did EDI software. When XML was introduced they had to add support for that, just the primitive XML 0.1 that was around at the time with none of the modern complexities. With the same backend code, just switching the parsing, they found either a 100x slowdown in parsing and a 10x increase in memory use or the other way around (so 10x slower, 100x the memory). The functionality was identical, all they did was switch the frontend from EDI to XML.
Since EDI is meant for processing large numbers of transactions as quickly as possible, I hate to think what the move to XML did to that. I moved on years ago, so I don't know whether they just threw more hardware at the problem to achieve the same thing EDI already gave them, but now with angle brackets, or whether the industry gave up on XML because of its poor performance.
Come to think of it I'm pretty sure they would have tried blockchain when that got trendy as well.
> that decision not to include comments in JSON, but I think while shocking it was and is totally correct.
YAML is fugly, but it emerged from JSON being unsupportive of comments. Now we're stuck with two languages for configuring infrastructure: a beautiful one that's unusable because it has no comments, and another where I can never format a list correctly on the first try, but comments are OK.
YAML also expanded to add arbitrary scripting via a pile of bolt-on capabilities so that it's now a serialisation language that's Turing-complete, or that includes Turing-complete capabilities within it, everything from:
command:
  - /bin/sh
  - -c
  - rm -rf $HOME
to:
state: >
  {% set foo = states('...') %}
  {% set bar = states('...') %}
  {% if foo == FOO and bar == BAZ %}
  ...
This makes it damn annoying to work with because everyone's way of doing it is different and since it's not a first-class element you have to rethink everything you want to do into strange patterns to work with how YAML does things.
The difference is that in YAML it's kind of expected (the second pseudocode example is from Home Assistant where almost everything nontrivial requires embedding scripting inside your YAML) while I've never seen it done in JSON.
The use cases for YAML that don't involve any sort of scripting vastly outnumber the use cases for YAML that involve embedding scripts into a document; so it's a little unfair and inaccurate to say that "in YAML it's kind of expected".
It is more fair to say that if your document needs to contain scripting, YAML is a better choice than JSON; for the singular reason that YAML allows for unquoted multiline strings, which means you can easily copy/paste scripts in and out of a YAML document without needing to worry about escaping and unescaping quotes and newline characters when editing the document.
Jupyter notebooks are a form of scripting in JSON. Anyway, all this is the fault of specific tools, not of YAML. This is like saying that laundry pods are bad because people eat them.
JSON is obviously perfectly usable, given how widely it's used. Even Douglas Crockford suggested just using a JSON interpreter that strips out comments, if you need them.
And if you want something like JSON that allows comments, and you aren't working on the web, Lua tables are fine.
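That preprocessing step is easy to sketch. A naive-but-careful Python version (not Crockford's actual JSMin; the regex matches string literals first, so slashes inside strings survive):

    import json
    import re

    _TOKEN = re.compile(
        r'"(?:\\.|[^"\\])*"'   # string literal: keep verbatim
        r'|//[^\n]*'           # line comment: drop
        r'|/\*.*?\*/',         # block comment: drop
        re.DOTALL)

    def loads_with_comments(text):
        strip = lambda m: m.group(0) if m.group(0).startswith('"') else ' '
        return json.loads(_TOKEN.sub(strip, text))

    doc = '''{
        // not standard JSON, but common in config files
        "retries": 3, /* inline note */
        "url": "https://example.com/a//b"
    }'''
    print(loads_with_comments(doc))  # {'retries': 3, 'url': 'https://example.com/a//b'}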
No, it was obviously and flagrantly incorrect, as evidenced by the success of interchange formats that do allow for comments, including many real world systems that pragmatically allow comments even when JSON says they shouldn't. This is Stockholm Syndrome.
But what can we expect from a spec that somehow deems comments bad but can't define what a number is?
How do you feel numbers are ill-defined in JSON? The syntactical definition is clear and seems to yield a unique and obvious interpretation of JSON numbers as mathematical rational numbers.
A given programming language may not have a built-in representation for rational numbers in general. That isn't the fault of JSON.
I can't really tell what you're trying to say; JSON also has no representation for rational numbers in general. The only numeric format it allows is the standard floating point "2.01e+25" format. Try representing 1/3 that way.
The usual complaint about numbers not being well-defined in JSON is that you have to provide all numbers as strings; 13682916732413492 is ill-advised JSON, but "13682916732413492" is fine. That isn't technically a problem in JSON; it's a problem in Javascript, but JSON parsers that handle literals the same way Javascript would turn out to be common.
Your "defense", on the other hand, actually is a lack in JSON itself. There is no way to represent rational numbers numerically.
I didn't say that json can represent all rational numbers. I said that all json numbers have an obvious interpretation as a rational number.
So far you haven't really shown an example of a json number which has an ambiguous or ill defined interpretation.
Maybe you mean that json numbers may not fit into 32 bit integers or double floats. That's certainly true but I don't see it as a deficiency in the standard. There is no limit on the size of strings in json, so why have a limit on numbers?
As long as they stay comments there's no harm. As soon as they become struct tags and stripping comments affects the document's meaning you lose the plot.
Worse than that - people will start tagging "this value is a Date" via comments, and you'll need to parse ad-hoc tags in the comments to decode the data. People already do tagging in-band, but at least it's in-band and you don't have to write a custom parser.
See also: PostScript. The document structuring extensions being comments always bothered me. I mean surely, surely, in a Turing-complete language there is somewhere to fit document structure information. Adobe: nah, we'll jam it in the comments.
"Use of the document structuring conventions... allows PostScript language programs to communicate their document structure and printing requirements to document managers in a way that does not affect the PostScript language page description"
The idea being that those document managers did not themselves have to be PostScript interpreters in order to do useful things with PostScript documents given to them. Much simpler.
For example, a page imposition program, which extracts pages from a document and places them effectively on a much larger sheet, arranged in the way they need to be for printing 8- or 16- or 32-up on a commercial printing press, can operate strictly on the basis of the DSC comments.
To it, each page of PostScript is essentially an opaque blob that it does not need to interpret or understand in the least. It is just a chunk of text between %%BeginPage and %%EndPage comments.
This is tremendously useful. A smaller scale of two-up printing is explicitly mentioned as an example on p. 9 of the spec.
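A toy version of that imposition pass, sketched in Python (the marker names follow my description above; real DSC uses %%Page: and %%PageTrailer, and a production tool would honor more of the spec):

    def split_pages(ps_text: str) -> list[str]:
        """Collect each page as an opaque blob between the page comments,
        never interpreting the PostScript inside."""
        pages, current = [], None
        for line in ps_text.splitlines(keepends=True):
            if line.startswith("%%BeginPage"):
                current = []
            elif line.startswith("%%EndPage"):
                pages.append("".join(current))
                current = None
            elif current is not None:
                current.append(line)
        return pages

    # Two-up imposition is then just pairing the blobs:
    # pages = split_pages(doc); sheets = list(zip(pages[0::2], pages[1::2]))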
> Could you imagine hitting a rest api and like 25% of the bytes are comments? lol
That's pretty much what already happens. Getting a numeric value like "120" by serializing it through JSON takes three bytes. Getting the same value through a less flagrantly wasteful format would take one.
I guess that's more than 25%. In the abstract ASCII integers are about 50% waste. ASCII labels for the values you're transferring are 100% waste; those labels literally are comments.
If you're worried about wasting bandwidth on comments, JSON shouldn't be a format you ever consider, for any purpose.
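The arithmetic is easy to check (Python):

    import json
    import struct

    print(len(json.dumps(120)))            # 3 -- "120" as ASCII text
    print(len(struct.pack("B", 120)))      # 1 -- the same value as a raw uint8

    # And the labels: {"temp": 120} spends most of its bytes naming the value.
    print(len(json.dumps({"temp": 120})))  # 13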
And both are poor interchange formats. When things stay in their lane, there is no "problem." When you try to make an interchange format out of a language with too many features, or with comments that people abuse to carry parsable information (e.g. "type information"), then there is a BIG problem.
It caused all kinds of problems, though those tend to be more directly traceable to the "be liberal in what you accept" ethos than to the format per se.
> In a programming language it's usually free to have comments because the comment is erased before the program runs
That's inherent to the language specification, but it isn't inherent to the document. You have to have a system with rules that require that erasure.
Nothing prevents one from mandating a system that strips those comments out of JSON. You could even "compile" JSON to, I don't know, BSON or msgpack or something.
Just as nothing prevents one from creating tooling to, say, extract type annotations from comments in a dynamically typed language.
I've said it before, but I maintain that XML has only two real problems:
1. Attributes should not exist. They make the document suddenly have two dimensions instead of one, which significantly increases complexity. Anything that could be an attribute should actually be a child element.
2. There should be one close tag, `</>`, which closes the last open element; named close tags burn a significant amount of space on useless syntax. Other than that and the self-closing `<tag />` (which itself is less useful without attributes), there isn't much that you need. Maybe a document close tag like `<///>`.
You'll notice that, yes, JSON solves both of those things. That's part of why it's so popular. The other part is just that a lot more effort was put into maximizing the performance of JavaScript than into shredding XML, and XSLT, the intended solution to this problem, is infamous at this point.
The problem of comments is kind of a non-issue in practice, IMO. You can just add a `"_COMMENT"` element or similar. Sure, yes, it will get parsed. But you shouldn't have that many comments that it will cause a genuine performance issue.
However, JSON still has two problems:
1. Schema support. You can't validate a file before de-serializing it in your application. JSON Schema does exist, but its support is still thin, IMX.
2. Many serializers are pretty bad with tabular data, and nearly all of them are bad with tabular data by default. So sometimes it's a data serialization format that's bad at serializing bulk data. Yeah, XML is worse at this. Yeah, you can use the `"colNames": ["id", ...], "rows": [[1,...],[2,...]]` method or go columnar with `"id": [1,2,...], "name": [...], "createDate": [...]` (see the sketch below), but you had better be sure both ends support that format.
In both cases, it seems like there is an attempt to resolve both of those issues. OpenAPI 3.1 has JSON schema included in it. The most popular JSON parsers seem to be adding tabular data support. I guess we'll see.
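Both layouts from point 2 are mechanical to produce; a quick Python sketch (field names invented for illustration):

    import json

    records = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
    cols = list(records[0])

    # row-oriented: one shared header, rows as positional arrays
    compact = {"colNames": cols,
               "rows": [[r[c] for c in cols] for r in records]}

    # columnar: one array per column
    columnar = {c: [r[c] for r in records] for c in cols}

    print(json.dumps(compact))   # {"colNames": ["id", "name"], "rows": [[1, "a"], [2, "b"]]}
    print(json.dumps(columnar))  # {"id": [1, 2], "name": ["a", "b"]}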
XML is a Markup Language. The text is what is being marked up, and the attributes are how to mark it up. Try writing the equivalent of <font family="Arial">Hello world</font> without attributes. I'll wait.
Using XML as a structured data interchange format is abuse. Of course the square peg doesn't fit in the round hole. You propose filing off the corners of the square, making it an octagon, so it will fit the round hole better.
While XML/XHTML weren't spec'ed/evolved to support your fun font-sans-attributes challenge, modern HTML certainly does...
<p>
  <style>
    @scope { font-family: "Arial"; }
  </style>
  Prospero: Where in the world is my teapot? Hello? I'm waiting!
</p>
I know one could argue that that CSS rule property is essentially an attribute, but it illustrates, like XML plists [1], that one can define the tags arbitrarily to have their content be meta upon sibling/nested content, subsuming attributes' role.
To wit, it seems to me a style issue.
[1] Apple has long used XML plists for data interchange and even archival storage, such as .webarchive (i.e. just a plist flavor). Of course they soon added a simple binary version to compress out some redundancy and encoding waste.
They used an XML nested tag approach, not attributes. Maybe not well rounded pegs and holes but it has worked for them on a large scale over a long time.
1. I think attributes absolutely should exist. They're great for describing metadata related to the tag: e.g. element ID, language, datatype, source annotation, namespacing. They add little in complexity.
2. The point of a close tag with a name is to make it unambiguous what it's trying to close off.
It sounds to me like what you want is not a better XML, but just s-exprs. Which is fine, but not quite solving the same problem.
3. As far as schema support, it seems to me that JSON Schema is well-established and perfectly cromulent (so much so that YAML authors are trying to use it to validate their stuff, the poor bastards), while XML schema validation, though robust, is a complex and fragmented landscape across DTD, XSD, RELAX NG, and Schematron. So although XML might have the edge, it's a more nuanced picture than XML proponents are claiming.
4. As far as tabular data, neither XML nor JSON were built for efficient tabular data representation, so it shouldn't be a surprise that they're clunky at this. Use the right tool for the job.
> 1. I think attributes absolutely should exist. They're great for describing metadata related to the tag: e.g. element ID, language, datatype, source annotation, namespacing. They add little in complexity.
No, they're barely adequate for those purposes. And you could (and, if you have an XSD, probably should) still replace them with elements. If you argue that you can't, then you're arguing that JSON does not function. You can just inline metadata alongside data. That works just fine. That's the thing about metadata: it's data!
You don't need attributes. Having worked in information systems for 25 years now, they are the most heavily, heavily, heavily misused feature of XML and they are essentially always wrong.
Well, now you're a bit stuck. You can make the XSD check basic data types in attributes, and that's it. You can never use complex types. If you need multiple values, you'll have to make your attribute a delimited string. You can't use order. You're limiting your ability to extend or advance things.
That's the problem with XML. It's so flexible it lets developers be stupid, while also claiming strictness and correctness as goals.
> 2. The point of a close tag with a name is to make it unambiguous what it's trying to close off.
Sure, but since closing tags must appear in the proper order anyway, the name isn't actually adding any information. The only thing it's doing is introducing opportunities for trivial syntax errors.
Because the truth is that `</>` would be 100% unambiguous in XML, because the rules changed:
The reason SGML had a problem with the generic close tag was that SGML didn't require a closing tag at all. That was the problem. It didn't have `<tag />`. It let you say `<tag1><tag2>...</tag1>` or `<tag1><tag2>...</>`.
Named closing tags had more of a point when we were actually writing XML by hand and didn't have text editors that could find the matching open and close tags for us, but that is solved now. We have syntax highlighting and hierarchical code folding in any text editor, never mind dedicated XML editors.
> 3. As far as schema support, it seems to me that JSON Schema is well-established and perfectly cromulent
Then my guess is that you have worked exclusively in the tech industry, for customers that are also exclusively in the tech industry. If you had worked in any other business with any other group of organizations, you would know that the rest of the world is absolute chaos. I think I've seen 3 examples of a published JSON Schema, and hundreds of services that publish none.
> 4. As far as tabular data, neither XML nor JSON were built for efficient tabular data representation, so it shouldn't be a surprise that they're clunky at this. Use the right tool for the job.
No, I think you're looking at what the format was intended to do 25 years ago and claiming that it should never be extended or improved. You're ignoring what it's actually being used for.
Unless you're going to make data queries return large tabular data sets to the user interface as more or less SQLite or DuckDB databases so the browser can freely manipulate them for the user... you're kind of stuck with XML or JSON or CSV. All of which suck for different reasons.
1. I don't disagree that attributes have been abused (so have elements), but you yourself identified the right way to use them. Yes, you can inline attributes as elements, but that also leads to a document that's harder to use in some cases. So long as you use them judiciously, it's fine. In actual text-markup cases they're indispensable, as HTML illustrates.
2. As far as JSON Schema, you're wrong on all accounts: wrong that I haven't seen Some Stuff, wrong that JSON Schema doesn't get used (see Swagger/OpenAPI), and wrong that XML Schema doesn't also get underutilized when a group of developers gets lackadaisical.
3. As far as what historical use has been, I'm less interested in exhuming historical practice than in simply observing which of the many use cases over the last 20 years worked well (and still work) and which didn't. The answer isn't that none of them worked, and it certainly isn't that XML users had a better bead on how to use it 20 years ago; it went through a massive hype curve just like a lot of techs do.
4. Regarding tabular data exchange, I stand by my statement. Use XML or JSON if you must, and sometimes you must, but there are better tools for the job.
Hard disagree about attributes: each tag should be a complete object, and attributes describe the object.
<myobject foo="bar"/>
// means roughly
new MyObject(foo="bar")
But objects can also be containers and that's what nesting is for. There shouldn't ever be two dimensions in the way you're describing. The pattern of
<myobject>
<foo>bar</foo>
</myobject>
is the root of most XML evil. Now you have to know whether myobject is a container or a franken-object with a strict sub-schema in order to parse it. The biggest win of JSON is that .loads/.dumps make it really obvious that it's for serializing complete objects, where a lot of the tooling surrounding XML makes you poke at the document tree.
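That constructor mapping is mechanical in practice; in Python, for instance (MyObject is a stand-in class, and real code would coerce the attribute strings to proper types):

    import xml.etree.ElementTree as ET

    class MyObject:
        def __init__(self, foo=None):
            self.foo = foo

    # attributes land directly on constructor keyword arguments
    obj = MyObject(**ET.fromstring('<myobject foo="bar"/>').attrib)
    print(obj.foo)  # bar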
Attributes exist due to XML's origin as a markup language. XML is actually (big surprise) a pretty good markup language, where the tags are sort of like function calls and the attributes are args, with little to no information to be gleaned out of the text. The big sin was to say "hey, the tooling is getting pretty good for these SGML-like markup languages, let's use it as a structured data interchange format, it's almost the same thing." Now all the data is in the text, and the attributes are not just superfluous but actively harmful, as there is a weird extra data axis that people will aggressively use.
I've been working on an XML parser of my own recently and, to be honest, as long as you're fine with a non-validating parser (which is still compliant), it's really not that bad. You have to parse DTDs, but you don't need to actually _do_ anything with them. Namespaces are annoying, but they're not in the main spec. CDATA sections aren't all that useful, but they're easy to parse. As far as I'm aware, parsers don't actually need to handle xml:lang/xml:space/etc. themselves; those are for the applications using the parser. Really the only thing that's been particularly frustrating for me is entity expansion.
If you want to support the wider XML ecosystem, with all the complex auxiliary standards, then yes, it's a lot of work, but the language itself isn't that awful to parse. It's a little messy, but I appreciate it at least being well-specified, which JSON is absolutely not.
CSTML is my attempt to fix all these issues with XML and revive the idea of HTML as a specific subset of a general data language.
As you mention one of the major learnings from the success of JSON was to keep the syntax stupid-simple -- easy to parse, easy to handle. Namespaces were probably the feature to get the most rework.
In theory it could also revive the ability we had with XHTML/XSLT to describe a document in a minimal, fully-semantic DSL, only generating the HTML tag structure as needed for presentation.
I unfortunately disagree that your syntax is "stupid-simple." But it highlights an impedance mismatch between XML users and JSON users.
JSON treats text as one of several equally-supported datatypes, and quotes all strings. Great if your data is heavily structured, and text is short and mixed with other types of data. Awful if your data is text.
XML and other SGML apps put the text first and foremost. Anything that's not text needs to be tagged, maybe with an attribute to indicate the intended type. It's annoying to express lots of structured, short-valued data. But it's simple and easy for text markup where the text predominates.
CSTML at first glance seems to fall into the JSON camp. Quoting every string literal makes plenty of sense in JSON, but not in the HTML/text-markup world you seem to want to play in.
Yeah "impedance mismatch" is a good way of putting it.
I wouldn't say we fall into the JSON camp at all though, but quite squarely into the XML-ish camp! We just wrap the inner text in quotes to make sure there's no confusion between the formatting of the text stored IN the document and the formatting of the document itself. HTML is hiding a lot of complexity here: https://blog.dwac.dev/posts/html-whitespace/. We're actually doing exactly what the author of that detailed investigation recommends.
You can see how it plays out when CSTML is used to store an HTML document https://github.com/bablr-lang/bablr-docs/blob/1af99211b2e31f.... Having the string wrappers makes it possible to precisely control spaces and newlines shown to the user while also having normal pretty-formatting. Compare this to a competing product SrcML which uses XML containers for parse trees and no wrapper strings. Take a look at the example document here: https://www.srcml.org/about.html. A simple example is three screens wide because they can't put in line breaks and indentation without changing the inner text!
As to the simplicity of the syntax I think you would understand what I mean if you were writing a parser.
It's particularly gratifying that you can easily interpret CSTML with a stream parser. XML cannot work this way because this particular case is ambiguous:
<Name
What does Name mean in this fragment of syntax? Is it the name of a namespace, or the name of a node? We won't know until we look ahead and see whether the next character is `:`.
That's why we write `<Namespace:Name />` as `:Namespace: <Name />` - it means there's no point in the left-to-right parse at which the meaning is ambiguous. And finally CSTML has no entity lookups so there's no need to download a DTD to parse it correctly.
Haha yeah someone pointed that out to me and I decided to leave it. I just needed a sentence, I'm not actually trying to show off every glyph in a font.
The problem is that engineers of data formats have ignored the concept of layers. With network protocols, you make one layer (Ethernet), you add another layer (IP), then another (TCP), then another (HTTP). Each one fits inside the last, but is independent, and you can deal with them separately or together. Each one has a specialty and is used for certain things. The benefits are 1) you don't need "a kitchen sink", 2) you can replace layers as needed for your use-case, 3) you can ship them together or individually.
I don't think anyone designs formats this way, and I doubt any popular formats are designed for this. I'm not that familiar with enterprise/big-data formats so maybe one of them is?
For example: CSV is great, but obviously limited, and not specified all that well. A replacement table data format could be binary (it's 2026, let's stop "escaping quotes", and make room for binary data). Each row can have header metadata to define which columns are contained, so you can skip empty columns. Each cell can be any data format you want (specifically so you can layer!). The header at the beginning of the data format could (optionally) include an index of all the rows, or it could come at the end of the file. And this whole table data format could be wrapped by another format. Due to this design, you can embed it in other formats, you can choose how to define cells (pick a cell-data-format of your choosing to fit your data/type/etc, replace it later without replacing the whole table), you can view it out-of-order, you can stream it, and you can use an index.
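A minimal sketch of that row layout in Python (all format details are invented here, just to show the layering):

    import struct

    def pack_row(cells: dict[int, bytes]) -> bytes:
        """Length-prefixed row: a count, then (column id, size, payload)
        triples. Payloads are opaque blobs; their inner format is another
        layer's concern, and absent columns simply aren't written."""
        out = [struct.pack("<H", len(cells))]
        for col, blob in cells.items():
            out.append(struct.pack("<HI", col, len(blob)))
            out.append(blob)
        return b"".join(out)

    def unpack_row(buf: bytes) -> dict[int, bytes]:
        (n,) = struct.unpack_from("<H", buf)
        cells, off = {}, 2
        for _ in range(n):
            col, size = struct.unpack_from("<HI", buf, off)
            cells[col] = buf[off + 6:off + 6 + size]
            off += 6 + size
        return cells

    row = pack_row({0: b"42", 3: b'{"unit": "kg"}'})  # empty columns simply absent
    assert unpack_row(row) == {0: b"42", 3: b'{"unit": "kg"}'}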
> With network protocols, you make one layer (Ethernet), you add another layer (IP), then another (TCP), then another (HTTP). Each one fits inside the last, but is independent, and you can deal with them separately or together.
It looks neat when you illustrate it with stacked boxes or concentric circles, but real-world problems quickly show the ugly seams. For example, how do you handle encryption? There are arguments (and solutions!) for every layer, each with its own tradeoffs. But it can't be neatly slotted into the layered structure once and for all. Then you have things like session persistence, network mobility, you name it.
Data formats have other sets of tradeoffs pulling them in different directions, but I don't think that layered design would come near to solving any of them.
Some early binary formats followed similar concepts. Look up Interchange File Format (IFF), AIFF, RIFF, their applications, and all the file formats using this structure to this day.
I would say that most of the video file formats today are a bit like that too: they allow different stream data encoding schemes with metadata being the definition of a particular format (mostly to bring up a more familiar example that is not as generic).
Have a look at Asset Administration Shells (AAS) -- it is a data exchange format built on top of JSON and XML (and RDF, and OPC UA and Protobuf, etc.).
Eh, this escaping problem was basically solved ages ago.
If we really wanted to make a UTF-8 data interchange format that needs minimal escaping, we already have ␜ (FS, File Separator, U+001C), ␝ (GS, Group Separator, U+001D), ␞ (RS, Record Separator, U+001E), and ␟ (US, Unit Separator, U+001F). The problem is that they suck to type, so they suck for character-based interchange. But we could add them to that emoji keyboard widget on modern OSs that usually gets bound to <Meta> + <.>.
If we put those someplace people could easily type them, that would resolve the problem.
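A sketch of what those separators buy you, in Python:

    # Unit/record separators instead of commas and quotes: fields never
    # need escaping (unless the data itself contains these control
    # characters, the caveat raised downthread).
    US, RS = "\x1f", "\x1e"

    rows = [["id", "note"], ["1", 'says "hi", then leaves']]
    encoded = RS.join(US.join(fields) for fields in rows)
    decoded = [record.split(US) for record in encoded.split(RS)]
    assert decoded == rows   # quotes and commas round-trip untouched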
But, binary data? Eh, that really should be transmitted as binary data and not as data encoded in a character format. Like not only not using Base64, but also not using a character representation of a byte stream like "0x89504E470D0A1A0A...". Instead you should send a byte stream as a separate file.
So we need a way to combine a bunch of files into a streaming, compressed format.
And the thing is, we already have that format. It's .tar.lz4!
The record separator is great, until you find that someone has put one in a data field. Like your comment. It just moves the problem (control and data mixed together) to a less-used control character.
> Whereas XML supports attributes, namespaces, CDATA, DTDs, QNames, xml:base, xml:lang, XInclude, etc etc. They gave it everything, including the kitchen sink.
But you don't have to use all those things. Configure your parser without namespace support, DTD support, etc. I'd much rather have a tool with tons of capabilities that can be selectively disabled than a "simple" one that requires _me_ to bolt on said extra capabilities.
It has the same problem as YAML: there are many, many ways to misconfigure your parser, and therein lie interesting security vulnerabilities. Complex DSLs are difficult to implement parsers for.
A simple DSL can be implemented in many programming languages very cheaply and can easily be verified against a specification. S-expressions are probably the most trivial language to write parsers for.
JSON is also pretty simple, but the spec being underspecified leads to ambiguous parsing (another security issue). In particular, duplicate-key handling and key ordering are not specified, and different parsers may treat them differently.
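Duplicate keys are the classic demonstration; in Python:

    import json

    # The spec doesn't say what to do with duplicate keys; Python's parser
    # silently keeps the last one. Other parsers keep the first, keep both,
    # or reject the document -- fertile ground for smuggling values past
    # a validator that made a different choice.
    print(json.loads('{"role": "user", "role": "admin"}'))  # {'role': 'admin'}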
If you don't go with DTD or XSD, you are only doing an XML-lookalike language, as those are XML's mechanisms for actually defining the schema: a compliant parser won't be able to validate the document, or maybe even to parse it.
Thus people go with custom parsers (how hard can it be, right?), and then have to keep fixing issues as someone or other submits an XML document with CDATA in it or similar.
The problem with this is that it only works as long as everyone instinctively knows that you don't use all the kitchen-sink stuff. It's there but everyone knows you don't use it because that way insanity lies.
And it works more or less OK until someone comes along who doesn't know that you don't use X, and it's in the standard so your implementation isn't standards-compliant and we'll go with your competitor over there instead because unlike you they do support it.
And so, over time, all the crap that "everyone knows" you don't use, gets activated and used. Speaking from experience here, not an invented edge case.
As a data interchange format, you can only depend on the lowest commonly implemented features, which for XML is the base XML spec. For example, Namespaces is a "recommendation", and a conformant XML parser doesn't need to support it.
The problem comes when malicious actors start crafting documents with extra features that should not be parsed, and much software wrongly parses them anyway because it uses the default, full-featured parser. Or various combinations of this.
It's a pretty well understood problem and best practices exist, but not everyone implements them.
I consider CSV to be a signal of an unserious organization. The kind of place that uses thousand-line Excel files with VBA macros instead of just buying a real CRM already. The kind of place that thinks junior developers are cheaper than senior developers. The kind of place where the managers browbeat you into working overtime by arguing from a single personal perspective that "this is just how business is done, son."
People will blithely parrot, "it's a poor workman who blames his tools." But the saying, as I've always heard it used (to suggest that someone who is complaining is just bad at their job), is a backwards sentiment. Experts in their respective fields do not refrain from complaining because they internalize failure as their own fault. They don't complain because they insist on only using the best tools and thus have nothing to complain about.
I go back to my statement that skilled people don't complain about their tools because the tools they use are the best and they have nothing to complain about.
An organization that cared about data integrity absolutely could make CSV work. But that same organization would not use CSV because there would be no point in putting themselves through that kind of Mickey Mouse exercise.
> The kind of place that thinks junior developers are cheaper than senior developers…
Unless junior developers start accepting lower salaries once they become senior developers, that is a fact. Do you mean that they think junior developers are cheaper even when considering cost per output?
I believe they're referring to the fact that if almost all of your code is written by junior developers without mentorship, you will end up wasting a lot of your development budget because your codebase is a mess.
Constant erosion of data formats into the shittiest DSLs in existence is annoying. "Oh, hey, instead of writing Python, how about you write in
* YAML, with magical keywords that turn data into conditions/commands
* template language for the YAML in places when that isn't enough
* ....Python, because you need to eventually write stuff that ingests the above either way
.... Ansible is great, isn't it?"
... and for some reason others decide "YES THIS IS AWESOME" and we now have a bunch of declarative YAML+template garbage.
> There was a thread here the other day about using Sqlite as an interchange format to REDUCE complexity. Look, I love Sqlite, as an application specific data-store. But much like XML it has a ton of capabilities, which is good for a data-store, but awful for an interchange format with multiple producers/consumers with their own ideas.
It's just a bunch of records put in tables with pretty simple data types. And it's trivial to convert into other formats while being compact and queryable on its own. So as far as formats go, you could do a whole lot worse.
Basic dicts, arrays, and templates might be the killer feature set for declarative data languages. If everyone eventually coalesces on those, it means there's something to it.
One issue with SQLite is that it's _not_ rewritten every time like JSON and XML are, so if you forget to vacuum it or round-trip it through SQL, you can easily leak deleted data in the binary file.
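A sketch of the failure mode and the fix, using Python's stdlib sqlite3 (the filename is a placeholder):

    import sqlite3

    conn = sqlite3.connect("export.db")
    conn.execute("CREATE TABLE IF NOT EXISTS t (secret TEXT)")
    conn.execute("INSERT INTO t VALUES ('do-not-ship')")
    conn.commit()
    conn.execute("DELETE FROM t")
    conn.commit()
    # 'do-not-ship' may still sit in freed pages inside export.db here.
    conn.execute("VACUUM")  # rewrites the file, dropping freed pages
    conn.close()

(SQLite also has PRAGMA secure_delete = ON to zero freed pages as you go, at some write cost.)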
SGML has at least SP/OpenSP (with its nsgmls tool) and sgmljs as full-featured, stand-alone parsers. There are also parsers integrated into older versions of products such as MarkLogic, ArborText, and other pre-XML authoring suites, renderers, and CMSs. Then there are language runtime libs such as SWI-Prolog's, with a fairly complete basic SGML parser.
ISO 8879 (SGML) doesn't define an API or a set of required language features; it just describes SGML from an authoring perspective and leaves the rest to an application linked to a parser. It even uses that term for the original form of stylesheets ("link types", reusing other SGML concepts such as attributes to define rendering properties).
SGML doesn't even require a parser implementation to be able to parse an SGML declaration, which is a complex formal document describing the features, character sets, etc. used by an SGML document; the idea was that the declaration could be read by a human operator to check and arrange for integration into a foreign document pipeline. Even SCRIPT/VS (part of IBM's DCF and the origin of GML) could thus technically be considered SGML.
There are also a number of historical/academic parsers, and SGML-based HTML parsers used in old web browsers.
> XML supports attributes, namespaces, CDATA, DTDs, QNames, xml:base, xml:lang, XInclude, etc etc. They gave it everything, including the kitchen sink.
Ah, the old "throw a bag of nouns at the reader and hope he's intimidated" rhetorical flourish. These things are either non-issues (like QNames), things a parser does for you, or optional standards adjacent to XML but not essential to it, e.g. XInclude.
> Ah, the old "throw a bag of nouns at the reader and hope he's intimidated" rhetorical flourish.
The accusation here is a deflection. OP's point isn't a gish gallop; it's that XML is absolutely littered with edge cases and complexities that all need to be understood.
> optional standards adjacent to XML but not essential
This is exactly OP's point. The standard is everything and the kitchen sink, except for all the bits it doesn't include which are almost imperceptible from the actual standard because of how widely used they are.
XInclude isn't part of the standard, and IME a minority of systems support it anyway. The OP's comment is an obvious gish gallop. You can assemble a similarly scary noun list for practically any technology.
Probably the same kind of person who tries to praise JSON's lack of comments as a feature or something.
IME there are two kinds of XML implementations: ones that handle DTDs and entity definitions for you and are insecure by default (XXE and SSRF vulnerabilities), and ones that don't and reject valid XML documents.
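A sketch of the second, reject-everything kind, using the third-party defusedxml package:

    import defusedxml.ElementTree as ET

    xxe = b"""<!DOCTYPE r [<!ENTITY x SYSTEM "file:///etc/passwd">]><r>&x;</r>"""
    try:
        ET.fromstring(xxe)
    except Exception as exc:  # defusedxml raises EntitiesForbidden here
        print(type(exc).__name__)  # -> EntitiesForbidden

It refuses the document outright: safe by default, but it will also bounce legitimate documents that happen to use DTD features.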
If Apple continues the budget Neo brand into a 12 GB iteration, I can see this becoming more realistic (rather than a novelty). That said, Parallels may need to review its licensing with a budget tier in mind. Few will buy a cheap computer and then pay what Parallels charges for a license (whether one-time or subscription).
They need to introduce something below the Standard license targeting the Neo. What I'd personally consider is:
- Standard gets 16 GB vRAM (to perfectly target the base MacBook Air). But leave it at 4-6 vCPUs to not compete with the Pro (still for general computing, not power-users)
- New "Lite" tier with 8 GB vRAM max for the Neo (4 vCPUs). Increasing to 12 GB vRAM if the Neo does.
Then you target an $89 one-time-purchase price point for the "Lite" tier. Essentially three plans, targeting your three major demographics: budget, standard, and pro/power-user.
This isn't a novelty; it will crush the low end of the PC market. No one cares if the next iteration will be better with 12GB of RAM. The workloads people say 8GB can't handle are ones the actual users will either wait out or tolerate. I've noticed that people who review the MacBook Neo basically don't get the point [1]; just the headline of this article is what matters: VMs work, and that's a big win. The most ridiculous thing about the laptop is that it appears to be repairable, which sort of tells me this is a template, similar to the M1 Air, for the future laptop designs Apple will come out with. [2]
> This isn't a novelty it will crush the low end of the PC market.
You took what I said out of context and then replied to something else. Running Parallels on a Neo is a novelty. Parallels is both what the thread is about AND what my reply was expressly about.
Nobody can reasonably read what I wrote, in context, and believe I was referring to the computer itself as a novelty.
I saw the other day people complaining about AI slop being posted on this site by new accounts - which I agree is bad.
Someone suggested that people with 10k karma and/or 10 years on this site should be able to do things (such as auto-ban) to those accounts.
The account that misrepresented your comment and thus acted in bad faith is one of those 10k+ accounts.
To me, this is a data point showing the fallacy of treating long-term membership and/or karma accrual as evidence of quality or good faith.
I admit, after rereading, that I did misrepresent what they said. I should have read their comment more closely; it was a knee-jerk reaction, and it's my fault.
I take its repairability slightly differently: it is highly modular, and I think the reason for that is longevity. They put a lot of engineering effort into this thing, and at this price point that has to pay back over a lot of devices over a long period. This design isn't going to change for many years, but the internals will iterate independently.
Windows doesn't run "just fine" on 4 GiB of RAM. I had a laptop with 6; Windows 10 became barely usable. If you want to run one small program at a time, I think you'll be OK. Forget about web browsing; you'll get one tab and it'll be slow.
Agreed. Windows 10/11 can run just fine on 4GB of RAM. You just can't run anything inside of Windows 10/11 with 4GB of RAM.
The last version of Windows where 4GB of RAM felt performant for me with applications was Windows XP. Not that every computer running the 32-bit edition of Windows XP could even see/utilize a full 4GB of RAM properly, but at least it was fast.
I ran a Windows 7 system with 3GiB as a gaming machine and it was just fine. Windows 7... the last Windows release that was acceptable-ish. Memories...
A lightweight Linux desktop can keep a decent number of browser tabs open (using Firefox; avoid Chrome) on 4GB RAM if you set up compressed RAM (zram) properly. It's not foolproof like 8GB would be, but it's absolutely fine for casual use.
A 2015 laptop, spinning rust. It was at least somewhat acceptable at purchase, but crapware installed with successive system updates brought it to a standstill. An SSD might've helped, but not by much. I wiped it and put Kubuntu on it to give to my wife, for whom it ran acceptably. She gave it back when she got a shiny new MacBook Air.
An SSD would have made an absolutely massive difference.
Source: I have clients that still have 2nd/3rd gen i5 systems running 3-4 GB of RAM with Windows 10 and they're tolerable solely thanks to SSDs. Swapping that much on a hard drive would just be painful to use.
Nobody should be interactively using a computer post-2018ish (whenever SSDs fell below $1/GB) that's booting and running primary applications off spinning rust. They're perfectly fine as bulk storage drives, but anyone waiting on an operating system that boots off one has wasted enough of their life in the past year to have paid for the SSD. Companies that wouldn't spend $100 on an upgrade are literally throwing money away paying their employees to wait on a shit computer.
> Heck, you can get 8GB Windows laptops with twice the SSD size of the MacBook Neo's for a little over half of the Neo’s price (again, at full MSRP.)
Let's see one of these $300 Windows laptops with 512GB of SSD (in a reasonable format, e.g. not an SD card), a body that isn't disposable, a screen that isn't a dim potato, a CPU that's within 20% of the Neo's performance, and a GPU that isn't embarrassed to be called a GPU.
I think you're misunderstanding; of course they don't exist. People don't get $300 Windows laptops for their performance, build quality, or anything similar. Nor do they care about screen brightness, and 256GB is fine for the use case, which is running Word or some other simple application for as little $$ as possible.
The implication in the comparison is that they’re similar. The similarity between a Neo and a $300 PC is that they can both boot up and run at least one program. That’s about where it ends.
They existed on AliExpress: Chuwi and the like (though the latest ones lie about the CPU model). You usually get NVMe storage, not the very best of course, but it does the job. And an IPS display. It's overall OK stuff, but the memory crisis has pushed them above $300 now. They usually run N150s.
I also got two N100 NUC-like boxes with 16GB DDR4 and 512GB NVMe for €115 each. Bought them as the memory crisis was starting. One is now my Home Assistant box; the other runs Matrix.
I still use an ancient Chuwi for going to the makerspace. It's still got hours of battery.
I went looking and did find stuff on Amazon, though none were made with an aluminum chassis, none had a Geekbench score anywhere near it, and none had the screen brightness.
As I write this, the top Amazon search for "windows laptop" is a
> Lenovo IdeaPad 15.6 inch Business Laptop with Microsoft 365 • 2026 Edition • Intel Core • Wi-Fi 6 • 1.1TB Storage (1TB OneDrive + 128GB SSD) • Windows 11
The person who approved describing its 128GB storage as 1.1TB should be hanged.
The CPU also has [0] 31% of the single-core rating and 14% of the overall CPU Mark rating. The screen has 220 nits of brightness (vs 500), it comes with 4GB of RAM, and it weighs 30% more. At least it's half the price, though.
The shopping situation for Windows laptops is utterly dire.
I recently helped a friend ditch Windows for Linux on an 8GB budget laptop he had. It had Win11 on it, which could barely function with nothing running, and kept swapping like crazy to its anemic eMMC "SSD". Windows can't really run reasonably with 4GB of RAM. It will only technically boot.
The Neo is powered by a fast and battery-friendly chip. It's no more a novelty than Chromebooks or Windows 11 notebooks with integrated graphics have been.
Don't underestimate what you can do with 8 GB of RAM. My mid-tier Intel 2019 MacBook Pro with 32GB RAM suddenly died at the end of 2023. I quickly got a base-model 256GB/8GB Mac mini M2 as a replacement. While initially supposed to be a temporary stand-in until my MBP got fixed, I ended up using it for another year as my main daily machine for everything, including professionally (fullstack software dev).
There was simply no need to upgrade; the Mac mini was faster in all regards than my Intel MBP. Out of curiosity about its capability, I wanted to see how gaming performs. I ended up playing through all three Tomb Raider reboots (Mac native, but using Rosetta!) at 1080p on high settings, absolutely amazed at how fast it was (mostly driven by the jump to the M2).
Only one thing ever made me notice the lack of RAM, and that was running the entire test suite of our frontend monorepo. It runs concurrently and fires up multiple virtual browser envs (vitest, jest, jsdom) to run the tests in parallel. There was stuttering and low responsiveness during execution, but it would complete in 3-4 minutes; it takes around 1 minute on my current M4 MBP.
VMWare Fusion is free, even if it is a pain in the butt to download. It also has GPU paravirtualization for Linux/Windows which is the only reason I use a proprietary VMM on macOS these days.
Because I was fed up with Parallels' subscription model, and they made me pay to upgrade the non-subscription version with every new macOS release, I dropped Parallels for UTM. I barely need Windows, only every other month or so, and often just for some small tasks. UTM is nice, but performance running Windows is waaay below Parallels. It is free, however, so I won't complain.
The performance story doesn't really make sense, as both UTM and Parallels use Apple's Hypervisor framework, which is what actually runs Windows. It should be identical.
Classic VM solutions like VirtualBox, VMware, Parallels, etc. always come with guest tools and driver packages for the guest that have a massive impact on performance. Just because both solutions use the same hypervisor doesn't mean they perform equally.
Intent looks interesting but the fact that they have their own credits system turns me off of it. I pay $200/mo for Claude Code and that's enough for me; I wish I could use Intent with that.
http://tart.run works great for running macOS (and Linux) VMs on macOS if you're technical. It's free for non-commercial uses too! (Don't think there's GPU acceleration tho).
There's something called menu pricing: in order to keep the existing customer base buying the more expensive, higher-end models, there needs to be an unjustifiable drop in quality when switching down.
The gap in specs is no mistake; if it were appealing enough for existing Air users to downgrade, it would cannibalise their bottom line.
I'd suggest you watch a teardown video. The Neo is absurdly repairable compared to just about anything in its category. It is extremely modular, and uses screws.
Twenty years ago, I worked part-time in a laptop repair facility for a large educational institution; this computer would have been a godsend (e.g. the first MacBooks had hundreds of screws, plastic everywhere).
Tell me about it. Even decades later, whenever my limbic dreaming needs "random technical noise" it still pulls up images of early 2000s laptop screw bins.
For nightmares, the screwbin either tips over repeatedly, or a dropped screw poofs indefinitely. Sometimes I wake up sweated, snackycaked crumb constellations jambed up'gainst bedsheets and fattie.
I skipped that entire generation, but the modern silicon keyboards are slick. My workshop computer is a 2012 MacBook "Pro" (disabled GPU), which also has fantastic keys. Best Apple keyboard ever has to be the 12" PowerBook G4, but that may just be nostalgic...
----
My major critique of the Neo is: for its intended market (younger users), it should be more durable, not less. Why is there no MagSafe power connector?
From a computer repair technician's POV, there will be lot$ of U$B port replacement$ due to power-supply abusers (have you seen some students' charging cables?!). From the manufacturer's POV: if they had MagSafe, they probably wouldn't need separate USB ports (IMHO).
It's almost guaranteed that the second revision of this product line will use MagSafe (you own the patent already!).
It is literally the main reason I would purchase an Air (were I in the market — am not). 15" screen would be reason#2.
There were several generations of Apple laptops that mysteriously didn't have MagSafe — I never bought one — very glad to see its return on my own M3MBAir15".
On a rational level it isn't surprising that the "compute" part is so small, given its origins, but for some reason it still caught me by surprise, seeing something barely larger than a Raspberry Pi.
But, yeah, this thing is crazy modular. I particularly want to call out how trivial it is to replace the ports, given how common a failure point they are. The keyboard/monitor is more involved, but absolutely still approachable.
I believe he finds just a single piece of light adhesive keeping a cable in place; everything else (including the battery) is screws only.
It looks like it's still bigger than the logic board on the 12" MacBook from 2015.[1]
I really wish Apple would resurrect that form factor, as every other MacBook since has seemed bulky in comparison. Thanks to OpenCore Legacy Patcher[2], I still haven't gotten a newer mac. With a modern M series chip, it wouldn't have such rough tradeoffs in battery life and performance. I'd definitely buy it.
The Neo actually has similar dimensions to the 12” overall, though not as tapered. That’s possible because it has a much slimmer bezel. The Neo is about a third heavier though.
Very true. In a way this is demonstrating the tradeoff between cost, repairability and size/weight.
The Neo is getting a lot of praise because it's all modular and screwed together. That should make it very easy to repair and also for Apple to do iterative upgrades, but that makes it bigger and heavier and size/weight does matter to people. Hence this thread.
What version of macOS are you running on yours? I have a 2017, 16GB, 1.7GHz, and it's DOG slow on Ventura, even with Reduce Motion and Reduce Transparency. I have considered downgrading just to see if there's an improvement.
I'm on Sequoia (v15.7.4). I have the original 2015 model (1.1GHz Core M-5Y31, 8GB of RAM). It's a little slow, but fine for what I use it for (web browser, syncing music/photos to/from my phone, simple coding tasks). My main gripe is the battery only has 60% of its original capacity. Apple won't replace the battery, and doing it yourself is pretty tricky. At some point it'll break or no longer get security updates, and then I'll probably get a MacBook Air.
If you're using OpenCore Patcher, it's important to install the root patches to enable graphics acceleration. Otherwise it'll be ridiculously slow.
It's sort of ironic that at the time, there were many complaints that Apple made its devices thin at the expense of more important features. Now that M series MacBooks are thicker again, there are complaints that they are too thick.
I owned an i9 MBP with a discrete GPU. It absolutely was too thin. The CPU and GPU ran hot, it throttled like crazy. It would drain battery while USB-C docked while idling. Worst laptop I've ever owned.
The M1 Max I replaced it with was the opposite. I don't think I heard the fans for the first month. But it was much larger.
Based on the fanless Air, I strongly suspect an M1 Max in the old chassis would have been totally fine for non-synthetic workloads, and an M1 Pro would probably have been fine in all scenarios.
But I think they overcorrected on the chassis design when they were shipping borderline faulty products and haven't walked it back yet.
I speculate they gave themselves a lot of thermal engineering margin to bump up TDP with the M-series MBP design (or perhaps they underestimated how good the M-series chips were going to be). The battery being at the TSA limit of 100Wh is quite nice as well. Another benefit is that it now differentiates the "Pro" line from the rest of the laptop lineup quite significantly. For most people the Air has enough power now, and it's plenty thin and light. The Pro line is for "true" pros with actually intense workflows.
I'm a dev and the MBP line is definitely overkill for me. The 15" MBA handles everything I can throw at it.
By dimensions, assuming the 2015 ("eleven-year-old") version, the 13" M4 MBA is 0.17" wider, 0.9" deeper, and 0.32 lbs heavier. Where it's harder to compare is thickness: the M4 is a uniform 0.44" thick, where the Intel one was tapered (0.11"-0.68").
Kind of hard to see that as "HUGE" in comparison. Bigger? Yes, but not really huge.
Apple could win a lot of likes if they added some form of storage expansion. Even a recessed USB-C for those tiny drives would go a long way.
Doesn't need to be super fast or fancy, just extend the life of device a little more.
Soldered internal storage and RAM are fine if I can store my non-essentials on a cheap drive, or my essentials in a way that is recoverable if the device fails. iCloud helps for photos and families, but it's still far too slow if you don't live near it.