Hacker News | cedilla's comments

Brains are not doing linear algebra, and they don't follow a concise algorithm.

What LLMs do is even farther from what neural nets do, and even there, artificial neurons are inspired by biological neurons but don't reimplement them.

You can understand human thought in terms of LLMs, but that is just a simile, like understanding physical reality in terms of computers or clockworks.


Fair use is much narrower than most people think; it's just that most rights-holders are not very belligerent. For example, streaming video games does not fall under fair use, most video essays critiquing films or series use far too much material relative to their commentary to qualify as fair use, remixing as a whole is not fair use, and most fan works are definitely not fair use. Legal protections don't help here; what does are the shit-storms companies like Nintendo of America had to endure when they tried to tighten the screws.

And that's in the US, other countries have similar exceptions but they are also usually quite limited.


I know for sure that each and every AI I use wants to write whole novellas in response to every prompt unless I carefully remind it to keep responses short over and over and over again.

This didn't use to be the case, so I assume that it must be intentional.


I've noticed this getting a lot worse recently. I just want to ask a simple question, and end up getting a whole essay in response, an 8-step plan, and 5 follow-up questions. Lately ChatGPT has also been referencing previous conversations constantly, as if to prove that it "knows" me.

"Should I add oregano to brown beans or would that not taste good?"

"Great instinct! Based on your interests in building new apps and learning new languages, you are someone who enjoys discovering new things, and it makes sense that you'd want to experiment with new flavor profiles as well. Your combination of oregano and brown beans is a real fusion of Italian and Mexican food, skillfully synthesizing these two cultures.

Here's a list of 5 random unrelated spices you can also add to brown beans:

Also, if you want to, I can create a list of other recipes that incorporate oregano. Just say the words "I am hungry" and I will get right back to it!"

Also, random side note, I hate ChatGPT asking me to "say the word" or "repeat the sentence". Just ask me if I want it and then I say yes or no, I am not going to repeat "go oregano!" like some sort of magic keyphrase to unlock a list of recipes.


There are a few exceptions though, like most mobile games, visual novels (many of which use Python of all languages, thanks to an excellent framework called Ren'Py), and of course games written in Unity or XNA, which use .NET languages.

Also, three decades is going a bit too far back, I think. In the mid nineties, C was still king, with assembly still hanging on. C++ was just one of several promising candidates, with some brave souls even trying Java.


On J2ME feature phones, Java was all there was, and even today many indies use it for casual titles on Android.

Which is why, after so much resistance from those devs to using the NDK for Vulkan (most kept using OpenGL ES instead), Google is bringing WebGPU to Java and Kotlin devs on Android.

It was announced at the last Vulkanised, there is already an alpha version available, and they should talk more about it at the upcoming Vulkanised.


> Also, three decades is going a bit too far back

My memory was wrong: I was thinking of the Quake 1 engine, but I just looked it up and it’s C with some assembly code, no C++. The reason I remember it being C++ was because Visual C++ was the compiler tooling required on Windows.


You've got it exactly the wrong way around. And that with such great confidence!

There was always confusion about whether a kilobyte was 1000 or 1024 bytes. Early diskettes always used 1000; only when the 8-bit home computer era started was the 1024 convention firmly established.

Before that it made no sense to talk about kilo as 1024. Earlier computers measured space in records and words, and I guess you can see how in 1960, no one would use kilo to mean 1024 for a 13-bit computer with 40-byte records. A kiloword was, naturally, 1000 words, so why would a kilobyte be 1024?

1024 being near-ubiquitous was only the case in the 90s or so - except in drive manufacturing and signal processing. Binary prefixes didn't invent the confusion; they were a partial solution. As you point out, while it's possible to clearly indicate binary prefixes, we have no unambiguous notation for decimal bytes.


> Early diskettes always used 1000

Even worse, the 3.5" HD floppy disk format used a confusing combination of the two. Its true capacity (when formatted as FAT12) is 1,474,560 bytes. Divide that by 1024 and you get 1440KB; divide that by 1000 and you get the oft-quoted (and often printed on the disk itself) "1.44MB", which is inaccurate no matter how you look at it.
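The mismatch is easy to check. A quick sketch (using the standard FAT12 geometry for these disks: 2 sides, 80 tracks, 18 sectors per track, 512 bytes per sector):

```python
# 3.5" HD floppy: 2 sides x 80 tracks x 18 sectors/track x 512 bytes/sector
capacity = 2 * 80 * 18 * 512
print(capacity)                  # 1474560 bytes
print(capacity / 1024)           # 1440.0   -> "1440 KB" (binary K)
print(capacity / 1_000_000)      # 1.47456  -> 1.47 MB  (decimal M)
print(capacity / (1024 * 1024))  # 1.40625  -> 1.41 MiB (binary M)
# The marketing "1.44 MB" divides the binary 1440 KB by a decimal 1000,
# mixing the two conventions:
print(capacity / 1024 / 1000)    # 1.44
```

So "1.44 MB" is a hybrid unit of 1,024,000-byte "megabytes" that matches neither convention.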


I'm not seeing evidence for a 1970s 1000-byte kilobyte. Wikipedia's floppy disk page mentions the IBM Diskette 1 at 242,944 bytes (a multiple of 256), and then 5¼-inch disks at 368,640 bytes and 1,228,800 bytes, both multiples of 1024. Those figures are multiples of the sector size. Nobody had a 1000-byte sector, I'll assert.


The wiki page agrees with parent, "The double-sided, high-density 1.44 MB (actually 1440 KiB = 1.41 MiB or 1.47 MB) disk drive, which would become the most popular, first shipped in 1986"


To make things even more confusing, the high-density floppy introduced on the Amiga 3000 stored 1760 KiB.


At least there it stored exactly 3,520 512-byte sectors, or 1,760 KB. They didn't describe them as 1.76MB floppies.


Firstly, I think you may have replied to the wrong person. I wasn't the one who mentioned the early diskettes point, I was just quoting it.

But that said, we aren't talking about sector sizes. Of course storage mediums are always going to use sector sizes of powers of two. What's being talked about here is the confusion in how to refer to the storage medium's total capacity.


> Of course storage mediums are always going to use sector sizes of powers of two.

Actually, that's not true.

As far as I know, IBM floppy disks always used power-of-2 sizes. The first read-write IBM floppy drives to ship to customers were part of the IBM 3740 Data Entry System (released 1973), designed as a replacement for punched cards. IBM's standard punched card format stored 80 bytes per card, although some of their systems used a 96-byte format instead. 128-byte sectors were enough to fit either, plus some room for expansion. In their original use case, files were stored with one record/line/card per disk sector.

However, unlike floppies, (most) IBM mainframe hard disks didn't use power-of-2 sectors. Instead, they supported variable sector sizes ("CKD" format) – when you created a file, it would be assigned one or more hard disk tracks, which then would be formatted with whatever sector size you wanted. In early systems, it was common to use 80-byte sectors, so you could store one punched card per sector. You could even use variable-length sectors, so successive sectors on the same track could be of different sizes.

There was a limit on how many bytes you could fit in a track - for an IBM 3390 mainframe hard disk (released 1989), the maximum track size is 56,664 bytes – not a power of two.

IBM mainframes historically used physical hard disks with special firmware that supported all these unusual features. Nowadays, however, they use industry-standard SSDs and hard disks, with power-of-two sector sizes, but running special software on the SAN which makes it look like a busload of those legacy physical hard disks to the mainframe. And newer mainframe applications use a type of file (VSAM) which uses power-of-two sector sizes (512 bytes through 32KB, but 4KB is most common). So weird sector sizes are really only a thing for legacy apps (BSAM, BDAM, BPAM-sans-PDSE), and certain core system files which are stuck on that format due to backward compatibility requirements. But go back to the 1960s/1970s, and non-power-of-2 sector sizes were totally mainstream on IBM mainframe hard disks.

And in that environment, 1000 bytes rather than 1024 bytes makes complete sense. However, file sizes were commonly given in allocation units of tracks/cylinders instead of bytes.


Human history is full of cases where silly mistakes became precedent. HTTP "referal" is just another example.

I wonder if there's a wikipedia article listing these...


It's "referer" in the HTTP standard, but "referrer" when correctly spelled in English. https://en.wikipedia.org/wiki/HTTP_referer


In this case not only was the naming a mistake, but so was the existence of the header itself.


It's way older than the 1990s! In computing, "K" has meant 1024 at least since the 1970s.

Example: in 1972, the DEC PDP-11/40 handbook [0] said on its first page: "16-bit word (two 8-bit bytes), direct addressing of 32K 16-bit words or 64K 8-bit bytes (K = 1024)". Same with Intel - in 1977 [1], they proudly said "Static 1K RAMs" on the first page.

[0] https://pdos.csail.mit.edu/6.828/2005/readings/pdp11-40.pdf

[1] https://deramp.com/downloads/mfe_archive/050-Component%20Spe...


It was exactly this - and nobody cared until the disks (the only thing that used decimal K) started getting so big that it was noticeable. With a 64K system you're talking 1,536 "extra" bytes of memory - or 1,536 bytes of memory lost when transferring to disk.

But once hard drives started hitting about a gigabyte was when everyone started noticing and howling.


It was earlier than the 90s, and came with the popular 8-bit CPUs of the 80s. The Z-80 microprocessor could address 64KB (which was 65,536 bytes) on its 16-bit address bus.

Similarly, the 4104 chip was a "4K x 1 bit" RAM chip and stored 4096 bits. You'd see this in the whole 41xx series, and beyond.


> The Z-80 microprocessor could address 64KB (which was 65,536 bytes) on its 16-bit address bus.

I was going to say that what it could address and what they called what it could address is an important distinction, but found this fun ad from 1976[1].

"16K Bytes of RAM Memory, expandable to 60K Bytes", "4K Bytes of ROM/RAM Monitor software", seems pretty unambiguous that you're correct.

Interestingly, Wikipedia at least implies the IBM System 360 popularized the base-2 prefixes[2], citing their 1964 documentation, but I can't find any use of it in there for the main core storage docs they cite[3]. Amusingly the only use of "kb" I can find in the pdf is for data rate off magnetic tape, which is explicitly defined as "kb = thousands of bytes per second", and the only reference to "kilo-" is for "kilobaud", which would have again been base-10. If we give them the benefit of the doubt on this, presumably it was from later System 360 publications where they would have had enough storage to need prefixes to describe it.

[1] https://commons.wikimedia.org/wiki/File:Zilog_Z-80_Microproc...

[2] https://en.wikipedia.org/wiki/Byte#Units_based_on_powers_of_...

[3] http://www.bitsavers.org/pdf/ibm/360/systemSummary/A22-6810-...


Even then it was not universal. For example, that Apple I ad that got posted a few days ago mentioned that "the system is expandable to 65K". https://upload.wikimedia.org/wikipedia/commons/4/48/Apple_1_...


Someone here the other day said that it could accept 64KB of RAM plus 1KB of ROM, for 65KB total memory.

I don't know if that's correct, but at least it'd explain the mismatch.


Seems like a typo given that the ad contains many mentions of K (8K, 32K) and they're all of the 1024 variety.


If you're using base 10, you can get "8K" and "32K" by dividing by 1000 and rounding down. The 1024/1000 distinction only becomes significant at 65536.
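A small sketch makes the cutoff concrete: both conventions produce the same "K" label for 8192 and 32768 bytes, and only diverge at 65536.

```python
# Label a byte count in "K" under a given convention (1024 or 1000 per K),
# rounding down as the parent comment describes.
def k_label(n_bytes, k):
    return f"{n_bytes // k}K"

for size in (8192, 32768, 65536):
    print(size, k_label(size, 1024), k_label(size, 1000))
# 8192  -> 8K   8K
# 32768 -> 32K  32K
# 65536 -> 64K  65K   <- first point where the labels diverge
```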


Still, the advertisement is filled with details like the number of chips, the number of pins, etc. If you're dealing with chips and pins, it's always going to be base-2.


> only when the 8 bit home computer era started was the 1024 convention firmly established.

That's the microcomputer era that has defined the vast majority of our relationship with computers.

IMO, having lived through this era, the only people pushing 1,000 byte kilobytes were storage manufacturers, because it allows them to bump their numbers up.

https://www.latimes.com/archives/la-xpm-2007-nov-03-fi-seaga...


> 1024 being near-ubiquitous was only the case in the 90s or so

More like late 60s. In fact, in the 70s and 80s, I remember the storage vendors being excoriated for "lying" by following the SI standard.

There were two proposals to fix things in the late 60s, by Donald Morrison and Donald Knuth. Neither were accepted.

Another article suggesting we just roll over and accept the decimal versions is here:

https://cacm.acm.org/opinion/si-and-binary-prefixes-clearing...

This article helpfully explains that decimal KB has been "standard" since the very late 90s.

But when such an august personality as Donald Knuth declares the proposal DOA, I have no heartburn using binary KB.

https://www-cs-faculty.stanford.edu/~knuth/news99.html


On the most recent 2026 page he writes that The Art of Computer Programming Volume 4C did go to print. Crazy to think that he has been working on the series since the 60s and presumably won't finish the remaining Volumes 5, 6, and 7.


As someone using computers before the 90s, it was well established at least a decade before that.


Companies pay millions and millions to get away from bespoke software, but not simply because of the costs. Companies want to do their core business, they don't want to also be a software enterprise, and assume all the risks that entails. Even if AI makes creating software 10 times less expensive, that doesn't really change.


SAP is bespoke software.


> billions of people now have access to better translations on demand

As a German speaker, I have watched the quality of German-language technical documentation steadily decline. 30 years ago, German documentation was usually top-notch. With the first machine translations, quality went notably down. Now, with LLM translation, it's often garbage with obviously nonsensical phrases in it.

This is especially true with large companies like IBM, Microsoft or Oracle.

I guess the situation is better for languages where translations only became available with LLMs.


Is this a new thing or do you think that most professors were always unable to do their job? Why do you think you are an exception?

I don't believe that your argument is more than an ad-hoc value judgment lacking justification. And it's obvious that if you think so little of your colleagues, they would also struggle to implement AI tests.


I assume that TVs have bad sound because better speakers just don't fit into their form factor.


Well, when fascists are in power, paper won't help anyone. But at this point, as a European I enjoy enumerated human and civil rights from multiple constitutions and several international treaties, which are directly enforceable by courts at the state, national, and European level.

The human and civil rights guaranteed by the US constitution are a complete joke in comparison, and most of them are not guaranteed directly by the constitution, but by Supreme Court interpretation of vague 18th-century law that can change at any time.


You seem to have missed the Bill of Rights. Which is odd, because whenever we tell you during online arguments that our rights are guaranteed, you all say that absolute rights are dumb and it's actually more sophisticated and European to not have them.

Not that courts, legislators, and administrations haven't tried and succeeded in abridging them somewhat in any number of different ways for shorter or longer periods, but the text remains, and can always be referred to in the end. They have to abuse the language in order to abridge the Bill of Rights, and eventually that passes the point of absurdity.

No such challenge in Europe. Every "right" is the right to do something unless it is not allowed.

