Hacker News | ptx's comments

This is addressed by PEP 810 (explicit lazy imports) in Python 3.15 (currently in alpha): https://peps.python.org/pep-0810/

Yeah, but it requires code changes to matter
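Until 3.15 lands, the effect of PEP 810's `lazy import` syntax can be approximated with a small proxy that defers the real import to first attribute access. The `LazyModule` class below is purely illustrative, not part of the PEP:

```python
import importlib


class LazyModule:
    """Bind a name now, import the module only on first attribute access."""

    def __init__(self, name):
        self._name = name
        self._module = None  # real module, filled in lazily

    def __getattr__(self, attr):
        # Only called when normal attribute lookup fails, i.e. for
        # anything other than _name/_module.
        if self._module is None:
            self._module = importlib.import_module(self._name)
        return getattr(self._module, attr)


lazy_decimal = LazyModule("decimal")  # nothing imported yet
print(lazy_decimal.Decimal("1.5") + lazy_decimal.Decimal("2.5"))  # prints 4.0
```

PEP 810 makes this transparent at the syntax level (`lazy import json`), so callers don't need a wrapper class at all.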

Could this be worked around by installing a single shell app which then loads other apps internally? I think it's possible to dynamically load Dalvik byte code in ART these days, right?

Obviously permissions would be a problem, as you can't update the app manifest, so there would either have to be one shell app per publisher (which would at least solve the problem of installing updates for their apps) or the shell would need its own internal system for managing permissions (like a browser does). Maybe it could also sandbox different apps from each other in different subprocesses, unless that needs root privileges, but maybe it's possible with Landlock?

Or we can always fall back to the "sweet solution" Steve Jobs offered us with the original iPhone, and just let the web browser be the shell.

Or implement everything as WeChat mini programs.


That would be very similar to LiveContainer for iOS [1]. I think that unsandboxed JIT is still possible as of Android 16, but Google has been cracking down on it.

[1] https://github.com/LiveContainer/LiveContainer


Better to follow the link to the technical details and just read those: https://cdn2.qualys.com/advisory/2026/03/17/snap-confine-sys...

The article linked in the submission is more verbose but less clear and half of it is an advertisement for their product.


I love that cheeky "oh btw, there's also another vulnerability in the Rust coreutils rewrite, but we aren't talking about that" paragraph.

That's because it's not a vulnerability per se. They found a way to use `rm` as a gadget for their privilege escalation.

The core problem is that there's a world-writable directory that is processed by a program running as root.
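The hazard with that pattern is the classic TOCTOU race: an attacker can swap an entry in the world-writable directory for a symlink between the privileged process's check and its use. A minimal sketch of the defensive side (illustrative only, not snap-confine's actual code) is to open with `O_NOFOLLOW`, which refuses a symlink at the final path component:

```python
import errno
import os
import tempfile


def open_untrusted(path):
    # O_NOFOLLOW makes open() fail (ELOOP on Linux) if the final path
    # component is a symlink, instead of silently following it to an
    # attacker-chosen target like /etc/shadow.
    return os.open(path, os.O_RDONLY | os.O_NOFOLLOW)


with tempfile.TemporaryDirectory() as d:
    entry = os.path.join(d, "entry")
    os.symlink("/etc/passwd", entry)  # what an attacker would plant
    try:
        os.close(open_untrusted(entry))
        print("followed the symlink")
    except OSError as e:
        print("refused:", errno.errorcode[e.errno])
```

This only closes one window; a root daemon processing attacker-controlled directories also needs `fstat()` on the opened fd, `dir_fd`-relative operations, and so on.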


It's a race condition that can be used as a primitive to achieve privilege escalation, which makes it legitimate. But even if you couldn't use it for anything other than tricking the system into acting on a directory it didn't mean to, it would still be a valid vulnerability (regardless of the application).

Claiming it's not a valid bug would be similar to claiming an infoleak isn't one either, even though infoleaks are among the building blocks of modern exploitation.

I'm not trying to be an ass, I'm just trying to add a bit of context to ensure that the implication is well understood.


But this vulnerability is enabled by a very creative exploitation of the complicated bind mounting scheme used by snap-confine. Just reading about these mounts between /usr/lib and /tmp and back triggered my sense of a potential security vulnerability.

Slightly tangential but I never ended up switching to nix (or guix) precisely because I don't fully understand the theory behind why things were done the way they were done and where the security boundaries are supposed to lie relative to a "regular" distro. I found plenty of prescriptive documentation giving me recipes to do anything I might be interested in doing but not much in the way of design documents explaining the system itself.

I never asked around so maybe that's on me. Debian works just fine though and containers are (usually) simple enough for me to wrap my head around.

I didn't end up using Flatpak for the same reason.


If you already sandbox your apps on Debian, there should be no security difference in doing so on NixOS, no?

The globally accessible /nix/store is frightening, but read-only. The same applies to the NixOS symlinks pointing there. This vulnerability was enabled by a writable /tmp and a root process reaching into it. That would be bad on both Debian and NixOS.


I'm not suggesting the presence of a vulnerability just that I'm not comfortable switching to a complex system where I have little to no understanding of the logic behind the design. My remarks were nothing more than a tangential gripe.

This.

Might be worth updating the link.


What does "electronic signing documents" mean? Keys used for signing? Or merely some documents that were signed with electronic signing?

To the best of my understanding it means that a system made by CGI for digital signing of documents (as in: you get something like a PDF from a government agency and need to digitally sign it and send it back) has had its source code and/or some data belonging to it leaked.

Skatteverket, the Swedish tax authority, has been quoted in media as confirming that they use CGI's system for digital document signing but that none of their data nor that of any citizens has been leaked.

https://www.svt.se/nyheter/inrikes/uppgift-statlig-it-inform...

"One of the government agencies that uses CGI’s services is the Swedish Tax Agency, which was notified of the incident by the company. However, according to the Swedish Tax Agency, its users have nothing to worry about.

“Neither our data nor our users’ data has been leaked. It is a service we use for e-signatures that has been affected, but there is no data from us or our users there,” says Peder Sjölander, IT Director at the Swedish Tax Agency."


So if no data was leaked from the tax agency or from the users, then the leaked "digital signing documents" must have belonged to the only remaining party, which is CGI, so perhaps they were just some marketing documents about the benefits of their digital signing service?

The original phrasing from the attacker, from the website that put the data up for download/sale, was ”documents (for electronic signing)” which implies that they’re documents that would be signed in said system. I would take all of this with a large helping of salt though. CGI claims it’s not real production data anyway; maybe it is and maybe it’s not.

The best case scenario is in line with what CGI claims: these are lorem ipsum fake docs from an old git repo for a test instance of the system.


If that is the case, then it would have been wrong from the beginning for any government to keep hold of the private keys for the signature on my citizen card.

Because in that case they can sign documents on my behalf without my permission. In a court case, it would be near impossible for me to prove that the government gave my private key to someone else and that it wasn't me signing an incriminating document.


I apparently didn't phrase that very well. If what is the case? I was trying to ask which case was the case, not trying to claim that something specific was the case.

I'm familiar with electronic signatures, and I know what documents are, but I have never heard the phrase "electronic signing documents" and don't know what that is supposed to mean. What kind of documents? Documents about signing, documents that were signed, documents in the sense that files containing keys could be considered documents, or what?


In Portugal we were early adopters for digital signatures on citizen cards.

You use the card reader, insert your gov-issued identification and can sign PDF papers which have legal validity since the private key from the citizen card was used.

Now imagine someone signing random legal documents with your ID for things like debts, opening companies or subscriptions to whatever.


Signed documents can be as simple as a transaction ID, a statement in text, or PII identifying what you are signing, or they can be a store of larger PDF files for download and verification. We do not know. I base this on how signing works technically in Sweden.

CGI is not the only supplier of these services.


We might've lucked out here: there is some signature data on ID cards today, and official _plans_ to build a government-backed signing service, but practically _nobody_ uses them, so revoking all those keys will be a minor issue.

Currently most Swedes use a private, bank-consortium-controlled ID solution for most logins and signatures.


The PR doesn't disclose that "an LLM did it", so maybe the project allowed a violation of their policy by mistake. I guess they could revert the commit if they happen to see the submitter's HN comment.

But search engines are not a good interface when you already know what you want and need to specify it exactly.

See for example the new Windows start menu compared to the old-school Run dialog – if I directly run "notepad", I always get Notepad; but if I search for "notepad" then, after quite a bit of chugging and loading and layout shifting, I might get Notepad, or I might get something from Bing, or something entirely different at different times.


Indeed, which is not all that different from LLM code generation, to be honest.

Although PowerShell borrows the syntax, it (as usual!) completely screws up the semantics. The examples in the docs [1] show first setting descriptor 2 to descriptor 1 and then setting descriptor 1 to a newly opened file, which of course is backwards and doesn't give the intended result in Unix; e.g. their example 1:

  dir C:\, fakepath 2>&1 > .\dir.log
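The ordering point is easy to demonstrate with a POSIX shell. A sketch driving `sh` from Python (the `log_contents` helper is made up for the demo):

```python
import os
import subprocess
import tempfile


def log_contents(redirs):
    # Run a /bin/sh command that writes "out" to stdout and "err" to
    # stderr, apply the given redirection string, and return whatever
    # actually reached the log file.
    fd, log = tempfile.mkstemp(suffix=".log")
    os.close(fd)
    try:
        subprocess.run(
            "{ echo out; echo err 1>&2; } " + redirs.format(log=log),
            shell=True,
            stderr=subprocess.DEVNULL,  # swallow the stray stderr in case 2
        )
        with open(log) as f:
            return f.read()
    finally:
        os.unlink(log)


# Unix order: point fd 1 at the log first, THEN clone fd 1 into fd 2.
print(log_contents("> {log} 2>&1"))  # both "out" and "err" reach the log
# The order shown in the PowerShell docs: fd 2 is cloned from the OLD
# fd 1 before fd 1 is redirected, so "err" goes to the terminal instead.
print(log_contents("2>&1 > {log}"))  # only "out" reaches the log
```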
Also, according to the same docs, the operators "now preserve the byte-stream data when redirecting output from a native command" starting with PowerShell 7.4, i.e. they presumably corrupted data in all previous versions, including version 5.1 that is still bundled with Windows. And it apparently still does so, mysteriously, "when redirecting stderr output to stdout".

[1] https://learn.microsoft.com/en-us/powershell/module/microsof...


IIRC PowerShell would convert your command's stream to your console encoding. I forget if this is according to how `chcp.com` was set or how `[Console]::OutputEncoding` was set (which is still a pain I feel in my bones for knowing today).

It's also not a file descriptor. It's a PowerShell stream, of which there are six you can redirect (Success, Error, Warning, Verbose, Debug, Information); most of them are more like log levels.


But the rest of QBASIC is missing.


Well... Right here on the very first website, Tim Berners-Lee talks about how to build interactive web applications (here called "gateways"), albeit server-side rather than client-side: https://info.cern.ch/hypertext/WWW/FAQ/Server.html


Couldn't they simply switch to zip files? Those have an index and allow opening individual files within the archive without reading the whole thing.

Also, I don't understand how using XML makes for a brittle schema and how SQL would solve it. If clients choke on unexpected XML elements, they could also do a "SELECT *" in SQL and choke on unexpected columns. And the problem with people adding different attributes seems like just the thing XML namespaces was designed for.
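The index property is the central directory at the end of a ZIP file, which lets a reader pull out one member without decompressing the rest. A quick sketch with the stdlib (illustrative only; the member names are made up and KDBX is not actually a ZIP container):

```python
import io
import zipfile

# Build a small archive in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
    z.writestr("meta.xml", "<meta/>")
    z.writestr("entries/group1.xml", "<group id='1'/>")
    z.writestr("entries/group2.xml", "<group id='2'/>")

with zipfile.ZipFile(buf) as z:
    print(z.namelist())                  # listed from the central directory alone
    print(z.read("entries/group2.xml"))  # seeks straight to that one member
```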


It's a single XML file. Zip sounds like the worst of both worlds. You would need a new schema that had individual files at some level (probably at the "row level.") The article mentions SQLCipher which allows encrypting individual values separately with different keys. Using different keys for different parts of a kdbx sounds ridiculous, but I could totally imagine each row being encrypted with a compound key - a database-level key and a row-level key, or using PKI with a hardware token so that you don't need to decrypt the whole row to read a single field, and a passive observer with access to the machine's memory can't gain access to secrets the user didn't explicitly request.
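The compound-key idea can be sketched with a keyed hash standing in as an ad-hoc KDF. Purely illustrative: a real design would use HKDF or similar, and all of the names here are made up:

```python
import hashlib


def row_key(db_key: bytes, row_id: bytes) -> bytes:
    # Derive a per-row key from the database-level key and a row
    # identifier; compromising one row key reveals nothing about the
    # other rows or the database-level key.
    return hashlib.blake2b(row_id, key=db_key, digest_size=32).digest()


master = b"database-level key derived from the user's passphrase"
k1 = row_key(master, b"row:0001")
k2 = row_key(master, b"row:0002")
assert k1 != k2 and len(k1) == 32
```

Each row would then be encrypted under its own derived key, so decrypting one entry never requires loading key material for the rest of the database into memory.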


ZIP entries could play a role similar to SQLite's pages. It could still be a single XML file, stored as piecewise-encrypted blocks, so that saving a change doesn't require an entire file rewrite: just the blocks that changed, plus the updated central directory at the end of the ZIP file.

Though there would be opportunity to use more of the ZIP "folder structure" especially for binary attachments and icons, it wouldn't necessarily be "required", especially not for a first pass.

(That said there are security benefits to whole file encryption over piecewise encryption and it should probably be an option whether or not you want in-place saves with piecewise encryption or whole file replacement with whole file encryption.)
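The in-place save property can be shown with the stdlib: appending to a ZIP rewrites only the new member plus the central directory at the end, leaving earlier members byte-for-byte where they were (a sketch of the mechanism, not of any KDBX format):

```python
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("block-0000.bin", b"x" * 1024)
head = buf.getvalue()[:1024]  # inside the first member's header + data

# Append mode: the writer seeks to the old central directory, writes the
# new member there, then rewrites the directory after it.
with zipfile.ZipFile(buf, "a") as z:
    z.writestr("block-0001.bin", b"y" * 16)

assert buf.getvalue()[:1024] == head  # earlier bytes untouched
```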


A ZIP file with solid encryption (i.e., the archive is encrypted as a single whole) has all of the same tradeoffs as a KDBX file as far as incremental updates are concerned.

A ZIP file with incremental encryption (i.e., each file is individually encrypted as a separate item) has its own problems. Notably: the file names are exposed (though this can be mitigated), the file metadata is not authenticated, and the central directory is not authenticated. So sure, you can read that index, but you can't trust it, so what good is it doing? Also, to support incremental updates, you'd either have to keep all the old versions of a file around, or else remove them and end up rewriting most/all of the archive anyway. It's frankly just not a very good format.

