
Yeah, saying you're making tofu for dinner is interpreted ideologically in the rural US. Along with your choice of beer and snack chips. I'm not surprised that ideology (and anger!) has been commoditized in the states; after all, if there's a single founding principle in our nation, it's staying out of my way unless your way involves me making a buck. But it's more than a little depressing how easily people get schnookered.

Anyway, dinner is meat, and dessert is this: take one part 65% chocolate chips, melt in microwave[1], add to an equal weight of silken tofu in the bowl of a 12-cup food processor, whir until combined, spoon into an Oreo cookie crust, refrigerate 6 hours or overnight. You might never make mousse again. I mean, mousse is better, but it's also 20x fussier. Want some deeper flavor? Add 1 tsp coffee powder and 1 tsp vanilla to the choc/tofu before the whir, or lighten[2] with a few tbsp heavy cream.

I've got friends who can't boil water, and I taught 'em this recipe, and to this day it's the best dessert they know how to make. I'm a little proud of simple recipes like that.

[1] Do you know how to melt chocolate in the nuker without scorching it? Look that up first.

[2] Yes, lighten! The cream catches more air, lifts up the "tofousse" a bit. If you like it even more puffy, add a tub of dairy whip topping, but this makes for a floppy pie when sliced unless you add gelatin.


My running note on self-hosted password managers:

1Password: since version 8, dead to me due to going cloud-only (no standalone vaults) and its over-use of Electron and its many unverified modules/libraries; remote storage of passwords only in encrypted form. Key stays offline.

vaultwarden: the clients are yet another Electron web app with its many unverified modules/libraries; remote storage of passwords only in encrypted form. Key stays offline.

KeePassXC, with Syncthing: leading contender, the best self-hosted solution that stores passwords remotely only in encrypted form; but the iOS build still has unverifiable source code, imposed by Apple. Key stays offline.

NordPass: best zero-knowledge remote storage; has apps for Windows, macOS, Linux, Android, and iOS. When it comes to browser extensions, one would be hard-pressed to find a wider selection: you can install NordPass on Chrome, Firefox, Safari, Opera, Brave, Vivaldi, and Edge. Not open source.

LastPass: hacked in 2022; some vault data was stored remotely in raw, unencrypted form.

pwsafe is still the safest CLI-only solution to date. The design of pwsafe (the Password Safe CLI) was started by Bruce Schneier, the crypto/security/privacy expert. pwsafe still uses the unbroken Twofish algorithm instead of the currently safer Argon2-based key derivation, simply because it's faster (after millions of iterations). The recommended GUI client for the Password Safe design is still Netwrix (formerly MATESO of Germany) Password Safe with a YubiKey, but stay away from its web-client variants due to the ease of memory access to JavaScript variables (by the OS, browser, JS engine, and JS language).

Only downside for ANY PasswordSafe-design GUI client is trusting yet another app repository source.


i collect these for fun! adding to my collection https://github.com/sw-yx/spark-joy/blob/master/README.md#dro...

more like this:

- https://andybrewer.github.io/mvp/ mvp.css

- https://yegor256.github.io/tacit/

- https://github.com/alvaromontoro/almond.css has thin fonts

- https://picocss.com/ Elegant styles for all native HTML elements without .classes, and dark mode automatically enabled.

- https://simplecss.org/demo 4kb incl dark mode

- https://watercss.kognise.dev/ Small size (< 2kb)

- https://github.com/xz/new.css (https://newcss.net/) 4.8kb sets some sensible defaults and styles your HTML to look reasonable

- https://github.com/oxalorg/sakura supports extremely easy theming using variables for duotone color scheming. It comes with several existing themes

- https://github.com/susam/spcss


Copilot is a verbose savant heavily afflicted by Dunning-Kruger... but an extremely useful one.

Do you remember how Googling was a skill?

Learning to use Copilot, Stable Diffusion, or GPT is exactly the same kind of thing.

Copilot's full power (at this time) is not in generating reams of code. Here are a few things it excels at:

- Snippet search: Say you can't remember how to see if a variable is empty in a bash conditional, ask.

- Template population: Say I have a series of functions I need to write in a language without good metaprogramming facilities. I can write a list of all the combinations, write one example, and the AI will pick up on the rest.

- Rust: If I get trapped because of some weird borrow checker issue with `fn doit(...`, I begin rewriting the function `fn fixed_doit(...`, and 9/10 times Copilot fixes the bug.
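To make the snippet-search bullet concrete: the answer it digs up for the bash case is the `-z`/`-n` test (the variable name here is made up for illustration):

```shell
# -z is true when a variable is empty or unset, -n when it is non-empty.
var=""
if [ -z "$var" ]; then
  echo "empty"
fi

var="hello"
if [ -n "$var" ]; then
  echo "non-empty"
fi
```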


Ben Thompson has analyzed in his blog how Cloudflare applied the Disruptive Innovation model - https://stratechery.com/2021/cloudflares-disruption/. The CTO of Cloudflare sort of acknowledged that on HN back then - https://news.ycombinator.com/item?id=28708371

There must be new niches ("value networks" as per that blog) that Cloudflare finds not worthwhile to serve. Usually they're at the lower end or the "nonconsumption" case, as described in the above blog post. That's one way to chip away at a bit of their customer base.

But it's best to just focus on customers and not imagine you're "competing with Cloudflare". Nothing to be gained by framing it that way.


What you're observing is not the rise and fall of best practice and good engineering. All you're seeing is the hype cycle.

Best practice and good engineering in web dev is the same as it is in any other software field: the best stack is the one you know cold.

Unfortunately, there are many popular stacks, and all of them have something to say about all the others. And 80% of learning materials for any given stack aren't from official sources, but are instead blog posts and videos and tweets from tech influencers. When they parrot those same criticisms, it lends the air of a grassroots shifting of the whole community.

The fact of the matter is that despite the huge levels of hype flux, large, popular, stable apps are still being made with "boring" tech.

Despite SPAs, Rails and Django apps are still built rendering server-side templates and sprinkling jQuery where necessary. Despite GraphQL, REST APIs are still the state of the art for most APIs. Despite Kubernetes and Docker, many apps are still deployed to Heroku or VPS providers.

It can be intimidating for newcomers because it can be difficult to separate hype from fact. My advice is to pick something, the more boring the better, and know it cold and ignore all claims of its death.

To address concerns about projects dying: that is pretty legitimate. Unfortunately, webdevs deploy trusted code into a highly untrustworthy and antagonistic environment. New technologies do come into play and require adapters and plugins to be written. I wouldn't worry about the overall size of a community as long as security issues are getting addressed.


Not at all. I think it looks deceptively simple, so programmers don't invest much time in learning it properly. But you can pick up a good book (such as Joe Celko's) and set up a good foundation in a short time.

Then you might like mainland Chinese crime movies. My two favs:

- Black Coal, Thin Ice

- Ash Is Purest White


You try 3D printing a house on a Pentium, or running inference models. We needed the compute. We need so much compute it boggles the mind and we don't know what to do with it.

Only then, as we are drowning in compute, does it spill over into other areas and allow that compute to be used as leverage towards enhancing or automating areas compute has yet to break into. Only then are crazy ideas like having large clusters of transistors act as neurons in a neural net actually possible, and efficient enough.

This is at least what I mean when I occasionally say something flippant like "(technology/computers/software) will eat the world". Maybe our relentless pursuit of more compute isn't the best way to solve complex problems, but it increasingly seems like it may be a way.


Curious what advice I can get from HN on this. I've loved drawing since I was a little kid, but gave up in middle school because I just didn't know how to get better and didn't have access to ways of improving (no money, nobody I could ask, etc.). As an adult, how can I learn to draw? Ideally in a self-guided way.

When I was younger I used to mess with some really obscure shit called oneirogics. It's obscure enough that I can't even google it.

It's basically substances or methods you can use in order to dream harder. Cholinergics (affecting choline; basically the opposite of anticholinergics, which are horrible pesticides ruining everything), and other methods (maracuya (passionfruit) leaves under the pillow, whose distinct smell reminds you you're dreaming while asleep; WILD lucid dreaming; etc.).

There are communities around these, and they're the wildest things you'll ever experience. Ever since the dream where I jumped off a train that was taking me to a genocide centre, somehow survived, and was bayoneted to death by cosmic Nazis in painstaking detail, I've never had anything resembling a nightmare (there's a recognition that things aren't happening for real and I can just wake up and laugh), and I pretty much stopped that stuff because it's super weird and even more underground than the illegal drug scene.

I met a guy in Colombia, a professor who was studying indigenous people doing this kind of thing; hence the maracuya-leaf method. It's like another world, where people deliberately dive into their dreams and come back with insights.

There's an extremely interesting paper published in Nature Neuroscience, the best insight into dreaming I've ever seen: dreaming is probably an adaptation for anticipation, a kind of virtual-reality training environment. When you see animals like your cat or dog twitch around while sleeping, lightly barking/meowing, it makes total sense [1]

[1] https://www.doaks.org/research/byzantine/scholarly-activitie...


One thing to check: Your hearing

I went through a period of life in my late 30s where I felt I was getting stupider. I was struggling to pick up new concepts and get a handle on new situations quickly like I used to. I was made redundant during a 'restructure', and I strongly believe it was due to these issues I was dealing with.

Several months after being made redundant I got my hearing checked - I had moderate hearing loss in the higher frequencies (where speech occurs). After getting hearing aids, it changed my life.

HA made me realise I had been relying on lip reading and context to understand what was being said. When I encountered new technology/concepts, I didn't have pre-existing base knowledge or context to 'fill in the gaps' of my hearing.

HA reversed all of the issues I had previously been experiencing.


IME: in my career I’m always a bit under or overworked it seems. Getting better at boundaries, but I suspect this is the case for many in our industry. Choose your poison, and realize it may not be what you want your whole life.

PostgreSQL. No framework (well, there is a rather low-level one of my own, but it is fluid). Using 4 libs: JSON, Postgres, HTTP, logging. I expose a basically JSON-based RPC. Peers/clients post JSON commands and receive JSON replies.

Upon startup, all data (except a couple of giant tables) is sucked from the DB into RAM into highly efficient data structures, hence most read requests are handled in microseconds. Those two giant tables are loaded into the same structs, but only partially. This is usually enough, as RAM keeps the last few years' worth of data, and requests that need something older come just a couple of times per month. Only writes touch the database, and they are immediately reflected in RAM as well.
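A minimal sketch of that write-through pattern (sqlite3 stands in for Postgres here, and the table, schema, and command names are all invented for illustration):

```python
import json
import sqlite3

# The database of record; at startup, every row is loaded into a RAM cache.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO items VALUES (1, 'alpha'), (2, 'beta')")
cache = {row[0]: {"id": row[0], "name": row[1]}
         for row in db.execute("SELECT id, name FROM items")}

def handle(request: str) -> str:
    """Dispatch a JSON command: reads hit RAM only, writes go through to the DB
    and are reflected in the cache immediately."""
    cmd = json.loads(request)
    if cmd["op"] == "get":
        return json.dumps(cache.get(cmd["id"]))
    if cmd["op"] == "put":
        db.execute("INSERT OR REPLACE INTO items VALUES (?, ?)",
                   (cmd["id"], cmd["name"]))
        cache[cmd["id"]] = {"id": cmd["id"], "name": cmd["name"]}
        return json.dumps({"ok": True})
    return json.dumps({"error": "unknown op"})

print(handle('{"op": "get", "id": 1}'))                  # served from RAM
print(handle('{"op": "put", "id": 3, "name": "gamma"}'))  # write-through
print(handle('{"op": "get", "id": 3}'))
```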

This is just an example. I write game-like servers as well, and those use a different approach. Generally I treat each project/product individually.


These types of books are my favorite style. I typically use whatever language I'm trying to learn and get more practice with (usually F#, Elixir, or Racket) instead of whatever the book suggests, if it does suggest one (usually Python or some OOP language). But sometimes the language used in the book is fun, like Logo or Prolog.

* Mazes for Programmers (by the same author as The Ray Tracer Challenge, but not as language-agnostic)

* The Elements of Computing Systems: Building a Modern Computer from First Principles

* Programming Machine Learning: From Coding to Deep Learning

* Writing an Interpreter in Go and Writing a Compiler in Go (both by the same author)

* Introducing Blockchain with Lisp: Implement and Extend Blockchains with the Racket Language

* Exploring Language with Logo

* Visual Modeling with Logo

* Turtle Geometry: The Computer as a Medium for Exploring Mathematics

* Thinking as Computation: A First Course (uses Prolog to solve various problems in small projects)

* Functional Web Development with Elixir, OTP, and Phoenix: Rethink the Modern Web App


> It doesn't convey much. $scalar or @array, that's all.

You're vastly underinformed. It conveys the mode of access: it shows at a glance whether the expression is a singular value, a list, or key/value pairs.

                      … of an array   … of a hash
    the whole         @a              %h
    one particular    $a[0]           $h{foo}
    values slice      @a[5,3,7]       @h{qw(bar foo quux)}
    key/value pairs   %a[5,3,7]       %h{qw(bar foo quux)}

Domain Driven Design for modeling skills

Flawless Consulting for people skills

Presentation Patterns for communications skills


Goes well with the nicely formatted, media enhanced web version of the book here: http://aurellem.org/society-of-mind/

Hey, sorry for replying off topic.

We had a discussion on Firefly earlier, and I've just written the first small piece of Firefly documentation, introducing the new capability-based async/await inference: https://www.ahnfelt.net/async-await-inference-in-firefly/

You said to reach out by email, but it's not showing up for me on your profile. If you're interested, there's also a draft on part 2 here: https://www.ahnfelt.net/p/ce19cc5c-d18c-452f-86c3-85a920e748...

Thank you :)


In HN formatting you need a blank line between items for a list to render as paragraphs:

1. Be patient. No matter what.

2. Don’t badmouth: Assign responsibility, not blame. Say nothing of another you wouldn’t say to him.

3. Never assume the motives of others are, to them, less noble than yours are to you.

4. Expand your sense of the possible.

5. Don’t trouble yourself with matters you truly cannot change.

6. Expect no more of anyone than you can deliver yourself.

7. Tolerate ambiguity.

8. Laugh at yourself frequently.

9. Concern yourself with what is right rather than who is right.

10. Never forget that, no matter how certain, you might be wrong.

11. Give up blood sports.

12. Remember that your life belongs to others as well. Don’t risk it frivolously.

13. Never lie to anyone for any reason. (Lies of omission are sometimes exempt.)

14. Learn the needs of those around you and respect them.

15. Avoid the pursuit of happiness. Seek to define your mission and pursue that.

16. Reduce your use of the first personal pronoun.

17. Praise at least as often as you disparage.

18. Admit your errors freely and soon.

19. Become less suspicious of joy.

20. Understand humility.

21. Remember that love forgives everything.

22. Foster dignity.

23. Live memorably.

24. Love yourself.

25. Endure.


I recently did a short training on technical writing. One piece of advice stuck with me because it is a) counter intuitive and b) not followed widely enough. It can also be said to follow "write like you code", so here it goes:

When writing a technical document, stick to one word per concept. Say you are using "throughput" for how many requests per second a service handles in a given period of time. Don't rename it to "load" in the middle of a sentence. People will waste time wondering if it's the same concept or subtly different.

This is the opposite of what we've been taught at school (avoid repetition! use synonyms!), so it takes a bit of discipline to follow it.


I got an Android DAP for Christmas last year and found that I really like musicolet: https://krosbits.in/musicolet/

It works really well. It has the playlisting/queueing metaphors that I prefer and it has a decent interface for finding things or even playing all of the albums from a specific artist.


This is a neat idea - and not without precedent.

Installing each package into its own directory, and maintaining symlinks into that directory structure, was an approach explored for large system installations in the 90s and 00s.

NIST depot, Xheir, CMU, and Nix all did this in varying ways. The advantage of this approach was that packages could be installed once on an NFS share, scoped by version/configuration, and mounted into the filesystem of every workstation on a campus. The workstations then just needed to maintain references to the files on the NFS share for their particular software configurations. There is also some interesting exploration of composable configuration templates that allow workstations to inherit and extend a particular configuration (e.g. the physics department might have a base template).

Nix is easily the most successful in this space. It even uses something similar to the NFS approach! It can maintain a remote cache of observed configurations to pull from instead of redoing work.
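The flavor of the approach, as a sketch (the store layout, paths, and package names here are all invented):

```shell
# Each package version gets its own isolated directory in a "store".
store=$(mktemp -d)
mkdir -p "$store/pkgs/hello-1.0/bin"
printf '#!/bin/sh\necho hello from 1.0\n' > "$store/pkgs/hello-1.0/bin/hello"
chmod +x "$store/pkgs/hello-1.0/bin/hello"

# The tree users actually see is just symlinks into the store.
mkdir -p "$store/profile/bin"
ln -s "$store/pkgs/hello-1.0/bin/hello" "$store/profile/bin/hello"

# Upgrading or rolling back means repointing symlinks, never overwriting files.
"$store/profile/bin/hello"
```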

Reading these LISA papers is a lot of fun. If you work in system administration (e.g. maintaining an Artifactory instance for your company) and have a day to spare, I'd very much recommend reading them, starting with the Nix paper!

To get you started:

Nix - https://www.usenix.org/conference/lisa-04/nix-safe-and-polic...

Depot Lite - https://www.usenix.org/conference/lisa-viii/depot-lite-mecha...


yes. if you wanted to annotate your genome you could "easily" do it on your brand new macbook (this is RAM-intensive; you probably need 32 GB). you'd need a reference genome, like https://www.nist.gov/programs-projects/genome-bottle

then you’d need a program like bwa http://bio-bwa.sourceforge.net/ to map your data.

then use https://samtools.github.io/bcftools/howtos/variant-calling.h... or something else to produce variants from the mapping results.

then compare your resultant vcf file to something like dbSNP: https://www.ncbi.nlm.nih.gov/snp/

at this point you can start generating a raw version of a 23andMe report.
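Stitched together, the steps above look roughly like this (file names are placeholders; assumes bwa, samtools, and bcftools are installed, with the dbSNP comparison sketched via `bcftools annotate`):

```shell
# 1. Index the reference genome (one-time, slow).
bwa index ref.fa

# 2. Map your sequencing reads against the reference.
bwa mem ref.fa reads_1.fq reads_2.fq > aln.sam

# 3. Sort and index the alignment.
samtools sort -o aln.sorted.bam aln.sam
samtools index aln.sorted.bam

# 4. Produce variants from the mapping results.
bcftools mpileup -f ref.fa aln.sorted.bam | bcftools call -mv -Oz -o calls.vcf.gz
bcftools index calls.vcf.gz

# 5. Compare against dbSNP by copying known rsIDs into your VCF
#    (dbsnp.vcf.gz downloaded and indexed separately).
bcftools annotate -a dbsnp.vcf.gz -c ID calls.vcf.gz -Oz -o annotated.vcf.gz
```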


Computer Systems: A Programmer’s Perspective.

Operating Systems: Three Easy Pieces.

Most important parts of my undergrad. Much more so than Algorithms or anything mathematical.


I first encountered quaternions in the context of abstract algebra. It's interesting to think of them as part of a spectrum starting with the reals and continuing to the octonions (and beyond). Interestingly, with each extension we lose something:

Complex numbers (2-dimensional): No more ordering

Quaternions (4-dimensional): No more commutative multiplication

Octonions (8-dimensional): No more associative multiplication

Sedenions (16-dimensional): We lose the alternative property, i.e., with octonions it's still true that x(xy) = (xx)y, but with sedenions it is not.

We can continue this process indefinitely, although I know nothing of the characteristics of the trigintaduonions or what comes later. I suspect that the flexible identity, a(ba) = (ab)a, might be the next to go, but I don't know for sure.
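The lost commutativity at the quaternion step is easy to see concretely with a hand-rolled Hamilton product (no library assumed; quaternions are tuples (w, x, y, z)):

```python
def qmul(p, q):
    """Hamilton product of two quaternions represented as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
k = (0, 0, 0, 1)

print(qmul(i, j))  # (0, 0, 0, 1): i*j = k
print(qmul(j, i))  # (0, 0, 0, -1): j*i = -k, so multiplication no longer commutes
```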


Thanks for posting this. I get why people do the victim-blaming thing; it lets them feel smart and superior, two feelings I have been known to enjoy.

But it's a fundamentally bad way to approach analyzing safety issues. For those who really want to dig in on the topic, I strongly recommend Dekker's "A Field Guide to Understanding 'Human Error'": https://www.amazon.com/gp/product/B00Q8XCSFI/ref=dbs_a_def_r...

It's nominally about examining airplane crashes. But he breaks down into great detail why the default analytical model is entirely inappropriate in ways that makes real safety improvement impossible. And it's the same set of analytical mistakes you see in a lot of blame-related behavior.


I finally broke down and got progressive lenses recently (one of my better decisions). However that means only a small portion of the monitor is in clear focus. So I also bought dedicated computer glasses (basically my regular distant prescription backed off by about 1 diopter) -- second best decision I made. Now I can finally go back to a non-zoomed screen.

I had the same objection to SQLite, and then I heard about Litestream, and it won me over.[0]

Litestream watches your SQLite database and then streams changes to a cloud storage provider (e.g., S3, Backblaze). You get the performance and simplicity of writing SQLite to the local filesystem, but it's syncing to the cloud. And the cool part is that you don't have to change any of your application code to do it - as far as your app is concerned, it's writing to a local SQLite file.

I wrote a little log uploading utility for my business that uses Litestream, and it's been fantastic.[1] It essentially carries around its data with it, so I can deploy my app to Heroku, blow away the instance and then launch it on fly.io, and it pops up with the exact same data.[2]

I'm currently in the process of rewriting an open-source AppEngine app to use SQLite + Litestream instead of Google Firestore.[3] It's such a relief to get away from all the complexity of GCP and Firestore and get back to simple SQLite.

[0] https://litestream.io/

[1] https://mtlynch.io/litestream/

[2] https://asciinema.org/a/I2HcYheYayeh7aHj23QSY9Vyf/embed?size...

[3] https://github.com/mtlynch/whatgotdone/pull/639
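For a sense of how little is involved: a typical Litestream setup is a small config file plus running `litestream replicate` alongside the app (the database path and bucket name below are made up):

```yaml
# /etc/litestream.yml -- replicate a local SQLite file to S3-compatible storage
dbs:
  - path: /var/lib/myapp/app.db
    replicas:
      - url: s3://my-backup-bucket/myapp
```

Recovery on a fresh instance is `litestream restore`, which pulls the latest replicated state back down before the app starts.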


This is called the Expression Problem because, as you allude to in your example, it comes up all the time in compilers and interpreters when dealing with expressions.

If you're looking for an elegant solution (the first I've ever seen), check out the work by Kiselyov on tagless final encodings.

The core idea is to invert the interface. Instead of the expression types being objects that implement an interface consisting of operations, or the expression types being enum values and the operations being total functions over those enums, the interface is the abstract logic of the operation, and the interface methods are the expression types.

Adding new operations is easy, it's a new interface implementation. Adding new expression types is easy, too. You create a new interface for the new type, and implement that interface for each compatible operation.

It does require typeclasses or traits or something like that, and if you want the full power you need language support for higher-kinded types. Look at the papers for full examples, but here's a teaser in pseudocode:

    interface BoolLogic {
        fn bool(bool) -> Self;
        fn if(Self, Self, Self) -> Self;
    }
    struct Eval {
        value: Anything
    }
    implement BoolLogic for Eval {
        fn bool(value) -> Eval {
            Eval { value }
        }
        fn if(condition, consequent, alternate) -> Eval {
            if (condition.value)
                consequent
            else
                alternate
        }
    }
    struct Print {
        text: String
    }
    implement BoolLogic for Print {
        fn bool(value) -> Print {
            let text = value.toString()
            Print { text }
        }
        fn if(condition, consequent, alternate) -> Print {
            let text = "if (" +
                condition.text +
                ") {" +
                consequent.text +
                "} else {" +
                alternate.text +
                "}"
            Print { text }
        }
    }
    interface NumLogic {
        fn num(int) -> Self
        fn add(Self, Self) -> Self
        fn eq(Self, Self) -> Self
    }
    implement NumLogic for Eval {
        fn num(value) -> Eval {
            Eval { value }
        }
        fn add(left, right) -> Self {
            let value = left.value + right.value
            Eval { value }
        }
        fn eq(left, right) -> Self {
            let value = left.value == right.value
            Eval { value }
        }
    }
    fn munge<T: BoolLogic + NumLogic>() -> T {
        T.if(T.eq(T.add(T.num(2), T.num(2)), T.num(4)), T.num(42), T.num(0))
    }
If you're curious I'd highly recommend reading and implementing the papers yourself. Have fun writing a typechecker, it's surprisingly easy! (with HKTs...)
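For reference, the Boolean fragment of the sketch above compiles almost as-is in Rust once the reserved words are renamed (`lit` and `cond` are my stand-ins for `bool` and `if`); no HKTs are needed for this much:

```rust
// One interface per "logic"; its methods are the expression constructors.
trait BoolLogic: Sized {
    fn lit(b: bool) -> Self;
    fn cond(c: Self, t: Self, f: Self) -> Self;
}

// One interpretation: evaluate the expression.
struct Eval(bool);
impl BoolLogic for Eval {
    fn lit(b: bool) -> Self { Eval(b) }
    fn cond(c: Self, t: Self, f: Self) -> Self { if c.0 { t } else { f } }
}

// Another interpretation: pretty-print it. Adding this touched nothing in Eval.
struct Print(String);
impl BoolLogic for Print {
    fn lit(b: bool) -> Self { Print(b.to_string()) }
    fn cond(c: Self, t: Self, f: Self) -> Self {
        Print(format!("if ({}) {{ {} }} else {{ {} }}", c.0, t.0, f.0))
    }
}

// One expression, interpreted differently depending on the type you ask for.
fn example<T: BoolLogic>() -> T {
    T::cond(T::lit(true), T::lit(false), T::lit(true))
}

fn main() {
    println!("{}", example::<Eval>().0);
    println!("{}", example::<Print>().0);
}
```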
