Hacker News | nostrademons's comments

Most of the economically valuable software written is pretty unique, or at least is one of few competitors in a new and growing niche. This is because software that is not particularly unique is by definition a commodity, with few differentiators. Commodity software gets its margins competed away, because if you try to price high, everybody just uses a competitor.

So goes the AI paradox: it's really effective at writing lots and lots of software that is low value and probably never needed to get written anyway. But at least right now (this is changing rapidly), executives are very willing to hire lots of coders to write software that is low value and probably doesn't need to be written, and VCs are willing to fund lots of startups to automate the writing of lots of software that is low value and probably doesn't need to be written.


Could you give some examples? I can only imagine completely proprietary technology, like trading systems or drug development. I have worked in software for many years and have always been paid well for it, yet none of it was particularly unique in any way. Some of it was better than others, but if you could show that there exists software people pay well for that AI cannot write, I would be really impressed. From my limited vantage point as a software engineer, it seems that the data in the product and its users are what make it valuable. For example: Google Maps, Twitter, AirBnB, or HN.

All it takes is a sufficiently big pile of custom features interacting. I work on a legal tech product that automates documents. Coincidentally, I'm just wrapping up a rewrite of the "engine" that evaluates how the documents will come out. The rewrite took many months, the code uses graph algorithms and contains a huge amount of both domain knowledge and specific product knowledge.
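
For flavor, document-automation engines of this sort typically evaluate a dependency graph of fields and clauses in topological order. A toy sketch (illustrative names only, not the actual product's code):

```python
from graphlib import TopologicalSorter

# Hypothetical fields in a legal document, each depending on others.
deps = {
    "party_names": set(),
    "governing_law": set(),
    "signature_block": {"party_names"},
    "full_contract": {"signature_block", "governing_law"},
}

# Evaluate fields in dependency order so nothing is rendered before its inputs.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

The real engine layers months of domain and product knowledge on top of a skeleton like this; that accumulated nuance is exactly what a skeleton can't show.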

Claude Code is having the hardest time making sense of it and not breaking everything every step of the way. It always wants to simplify, handwave, "if we just" and "let's just skip if null"; it has zero respect for the amount of knowledge and nuance in the product. (Yes, I do have extensive documentation, and my prompts are detailed and rarely shorter than 3 paragraphs.)


You know how whenever you shuffle a deck of cards you almost certainly create an order that has never existed before in the universe?

Most software does something similar. Individual components are pretty simple and well understood, but as you scale your product beyond the simple use cases ("TODO apps"), the interactions between these components create novel challenges. This applies to both functional and non-functional aspects.
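
The deck-of-cards claim is easy to check: a 52-card deck has 52! possible orderings, vastly more than the number of shuffles ever performed.

```python
import math

# Number of distinct orderings of a standard 52-card deck.
orderings = math.factorial(52)

print(f"52! has {len(str(orderings))} digits")  # 68 digits, i.e. ~8 x 10^67
```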

So if "cannot make with AI" means "the algorithms involved are so novel that AI literally couldn't write one line of them", then no - there isn't a lot of commercial software like that. But that doesn't mean most software systems aren't novel.


Were you around when any of Google Maps, Twitter, AirBnB, or HN were first released? Aside from AirBnB (whose primary innovation was the business model, and hitting the market right during the global financial crisis when lots of families needed extra cash), they were each architecturally quite different from software that had come before.

Before Google Maps nobody had ever pushed a pure-Javascript AJAX app quite so far; it came out just as the term AJAX was coined, when user expectations were that any major update to the page required a full page refresh. Indeed, that's exactly what competitor MapQuest did: you had to click the buttons on the compass rose to move the map, it moved one step at a time, and it fully reloaded the page with each move. Google Maps's approach, where you could just drag the map and it loaded the new tiles in the background offscreen, then positioned and cropped everything with Javascript, was revolutionary. Then add that it gained full satellite imagery soon after launch, something people didn't know was possible in a consumer app.

Twitter's big innovation was the integration of SMS and a webapp. It was the first microblog, where the idea was that you could post to your publicly-available timeline just by sending an SMS message. This was in the days before Twilio, when there was no easy API for sending these; you had to interface with each carrier directly. It also faced a lot of challenges around the massive fan-out of messages; indeed, the joke was that Twitter was down more than it was up because they were always hitting scaling limits.

HN has (had?) an idiosyncratic architecture where it stores everything in RAM and then checkpoints it out to disk for persistence. No database, no distribution, everything was in one process. It was also written in a custom dialect of Lisp (Arc) that was very macro-heavy. The advantage of this was that it could easily crank out and experiment with new features and new views on the data. The other interesting thing about it was its application of ML to content moderation, and particularly its willingness to kill threads and shadowban users based on purely algorithmic processes.
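
That architecture is easy to sketch. A minimal toy version (hypothetical names, JSON in place of whatever serialization Arc actually uses) of the keep-it-all-in-RAM, checkpoint-to-disk pattern:

```python
import json
import os
import tempfile

class InMemoryStore:
    """All state lives in a dict; persistence is a periodic snapshot to disk."""

    def __init__(self, path):
        self.path = path
        self.items = {}
        if os.path.exists(path):
            with open(path) as f:
                self.items = json.load(f)

    def put(self, key, value):
        self.items[key] = value  # served from RAM; no database round-trip

    def checkpoint(self):
        # Write atomically: dump to a temp file, then rename over the old one,
        # so a crash mid-write never corrupts the last good snapshot.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.path) or ".")
        with os.fdopen(fd, "w") as f:
            json.dump(self.items, f)
        os.replace(tmp, self.path)

store = InMemoryStore("items.json")
store.put("1", {"title": "hello"})
store.checkpoint()
```

With everything in one process, adding a new view of the data is just writing a function over `self.items`, which is presumably why experimenting with new features was so cheap.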


Agencies have switched to SaaS products and integrations via serverless or low-code tooling, exactly because there is already too much of the same.

If you're doing it right, you start with a centralized service; get the product, software architecture, and data flows right while it's all in one process; and then distribute along architectural boundaries when you need to scale.

Very few software services built today are doing it right. Most assume they need to scale from day one, pick a technology stack to enable that, and then alter the product to reflect the limitations of the tech stack they picked. Then they wonder why they need to spend millions on sales and marketing to convince people to use the product they've built, and millions on AWS bills to scale it. But then, the core problem was really that their company did not need to exist in the first place and only does because investors insist on cargo-culting the latest hot thing.

This is why software sucks so much today.


>> If you're doing it right, you start with a centralized service; get the product, software architecture, and data flows right while it's all in one process; and then distribute along architectural boundaries when you need to scale.

I'll add one more modification if you're like me (and apparently many others): when you go too far with your distribution, pull it back to a sane number (i.e., a small handful) of distributed services, hopefully before you get too far down the implementation...


The rule may not hold with AI-driven development. The rule exists because it's expensive to rewrite code that depends on a given data structure arrangement, so programmers usually resort to hacks (e.g. writing translation layers, or views and traversals of the data) so that functionality written later can work with a more convenient data structure. If writing code becomes free, the AI will just rewrite the whole program to fit the new requirements.

This is what I've observed with using AI on relatively small (~1000 line) programs. When I add a requirement that requires a different data structure, Claude will happily move to the new optimal data structure, and rewrite literally everything accordingly.

I've heard that it gets dicier when you have source files that are 30K-40K lines and programs in the million+ line range. My reports tell me that Gemini falls down badly in this case, because the source file blows the context window. But even then, they've found you can make progress by asking Gemini to come up with the new design; then a list of modules that depend on the old structure; then a shim layer, written module by module, so the old code uses the new data structure; then the replacement of the old data structure with the new one; and finally the removal of the shim layer, rewriting each module to use the new data structure natively. Basically, babysit it through the same large-scale refactoring an experienced programmer would do in a million+ line codebase, but have the AI rewrite modules in 5 minutes that would take a programmer 5 weeks.
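
The shim-layer step is the classic adapter move. A minimal sketch with hypothetical names: old call sites index tasks by status, the new structure is a flat list of records, and the shim lets the old code run unchanged during the migration:

```python
# Old code expects: {"todo": [...], "done": [...]} (title lists keyed by status).
# New structure: a flat list of records, easier to extend with more fields.
new_tasks = [
    {"title": "task a", "status": "todo"},
    {"title": "task b", "status": "done"},
]

class TasksByStatusShim:
    """Presents the new flat-record list through the old dict-of-lists interface."""

    def __init__(self, tasks):
        self._tasks = tasks

    def __getitem__(self, status):
        # Old callers index by status; derive the title list on the fly.
        return [t["title"] for t in self._tasks if t["status"] == status]

tasks = TasksByStatusShim(new_tasks)
print(tasks["todo"])  # old call site works unchanged: ['task a']
```

Once every module reads through the shim, each one can be rewritten to use `new_tasks` directly and the shim deleted, which is the final step described above.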


The reason for the rule of thumb is that you don't know whether you will need to change this code here when you change it there until you've written several instances of the pattern. Oftentimes different generalizations become appropriate for N = 1, N = 2, 3 <= N <= 10, 10 < N <= 100, and N > 100.

Your example is a pretty good one. In most practical applications, you do not want to be setting button x coordinates manually. You want to use a layout manager, like CSS Flexbox or Jetpack Compose's Row or Java Swing's FlowLayout, which takes in a padding and a direction for a collection of elements and automatically figures out where they should be placed. But if you only have one button, this is overkill. If you only have two buttons, this is overkill. If you have 3 buttons, you should start to realize this is the pattern and reach for the right abstraction. If you get to 10 buttons, you'll realize that you need to arrange them in 2D as well and handle how they grow & shrink as you resize the window, and there's a good chance you need a more powerful abstraction.
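
What such a layout manager boils down to, stripped of everything real toolkits handle (wrapping, growth, baselines), is a sketch like this; the names and defaults are made up:

```python
def flow_layout(widths, padding=8, direction="ltr"):
    """Compute the x coordinate of each element in a single row."""
    xs, x = [], 0
    for w in widths:
        xs.append(x)
        x += w + padding
    if direction == "rtl":
        total = x - padding  # total row width, without the trailing padding
        xs = [total - x0 - w for x0, w in zip(xs, widths)]
    return xs

# Three buttons, 100px wide each: positions fall out automatically.
print(flow_layout([100, 100, 100]))  # [0, 108, 216]
```

Adding a fourth button means passing one more width, not editing three hand-set coordinates, which is the payoff of reaching for the abstraction at N = 3.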


We do want software defined vehicles, we just don’t want automatic updates or cars that require an Internet connection to work.

They're still accountable to customers/users. If you don't like their products, don't use them. I don't.

The unfortunate thing about this lobbying effort is that it's making the government accountable to Meta, which is the worst of all worlds.


You know Meta keeps a shadow profile on every person known to any one of their users, right? So even if you don't use it, they almost certainly have you in their system.

At least when the government is working, there are controls around what they can collect, what they can and cannot do with it, and who they share it with.


There aren't really. The NSA keeps a shadow profile on you too, and the information Meta has is a subset of that. The Snowden disclosures showed that.

I've had Indian coworkers remark similarly. The way they put it was "In India, corruption is democratized. Everybody gets in on the act, and everybody can profit a little bit. In the U.S, corruption is reserved for the very top; only they can profit, and everybody else just suffers. Personally, I prefer the Indian system."

Was kind of eye-opening as a native-born U.S. citizen. I'd always just assumed things worked according to the rules here, but after hearing that, I started seeing corruption at the top all the time.


Things do work according to the rules here.

But the wealthy write the rules.


They may be, but if there are no elections, there is no United States. Constitutionally, its government is predicated on having elected representatives.

I could see Trump trying this, but I can also see dozens of other people or groups, some richer, more powerful, more competent, and more ruthless than Trump, just waiting in the wings for the guardrails to come off so they can make a play to rule the territory of the former United States. If he tries and succeeds at this, it's open season. It's not a Trump dictatorship; it's a civil war, akin to the Chinese Civil War after the emperor fell or the Syrian civil war after the Arab Spring.


It's interesting that this article is funded by Francis Fukuyama, who famously wrote "The End of History" [1] in 1992, which argued that the rules-based liberal democratic world order had won and there was no more need for geopolitical realism. This article represents a complete repudiation of his past beliefs, and basically an admission that he was wrong.

Anyway, just as Fukuyama was right for ~20 years and then very, very wrong, I suspect this essay is too. The U.S. mapped out all the game theory around nuclear war in the 50s and 60s. If you have too many states with nuclear weapons, nuclear war becomes inevitable, just like if you have too many firms in a market a price war becomes inevitable. That's why the U.S. and other nuclear powers have put so much effort into nuclear non-proliferation. North Korea may have been right, in the short-term national-interest sense, to pursue and continue its nuclear weapons program, but the end result here is that most of humanity is going to die in a nuclear war, and we won't have such things as states and nations afterwards.

[1] https://en.wikipedia.org/wiki/The_End_of_History_and_the_Las...


One could argue that proving Fukuyama wrong would require a 'new ideology' leading to global nuclear disarmament.

Honestly, in a scenario like global nuclear war, I see the Kim family, and by extension NK, making it out relatively unscathed.

Being one of the little players who got the bomb, positioned between the big guys and the rest of the little guys, secured them in the short term. Global conflict is likely to break out between the big guys or some little guys, and NK isn't really instigating much with anyone, so when the bombs fly they're just going to be overlooked.


The phrasing "HR isn't there to protect you, it's there to protect the company" applies more here.

My experience is also that HR is very reasonable and cooperative with harassment claims. But the thing is that when you have a legit harassment claim, the law is there to protect you. You could make things very expensive for the company in court, and so protecting the company does mean protecting you and treating you respectfully and cooperatively.

If HR investigates and finds you don't have a legit case and that in fact you may have been the instigator, then protecting the company probably means getting rid of you. Your judgment and account of the facts is questionable in that case, and you're a liability from the other side.

I don't know exactly what happened in this case, but in the harassment case I had to handle as a manager, the (male) employee said that the (female) victim had initiated everything and had this weird fascination with him, while the paper trail that everybody could see clearly showed that he was both the instigator and the one behaving improperly. Projection is strong in cases like these. So it's entirely possible we're not getting the full story from this anonymous blog post.


> and that in fact you may have been the instigator, then protecting the company probably means getting rid of you.

That protects other employees. If you are the instigator and then go complain to HR trying to make them punish the victim, firing you protects everyone around you. And it protects the culture from becoming toxic.

HR can play a negative role, but this scenario is not one of those cases.


> The phrasing "HR isn't there to protect you, it's there to protect the company" applies more here.

I agree (although I had originally interpreted the statement differently). Unfortunately, the "XYZ isn't there to protect you" part applies to so much in life. Even the police don't have a responsibility to protect you specifically (just the public as a whole). The lesson from stuff like this is often to make sure your best interests are aligned with those of the most powerful and active stakeholder in the "room".


Or don't engage with people whose interests are not aligned with yours. You can do an awful lot, and carve out a pretty good life for yourself, if the powerful people whose interests are not aligned with yours don't know that you exist. Considering that everybody else has an incentive to align with the most powerful and active stakeholder in the world, this is the only way to avoid a unipolar dictatorship.

Relating it back to the story at hand, the blogpost's author would've done well to just disengage from the coworker who didn't like him, and also to not report them to HR. What I had to tell my report when HR got involved: "The right thing to do here was nothing."

