I am working on a modal code editor project that you might find interesting, then. It also operates directly on an AST, which is represented as UI nodes that closely resemble normal text layout. Email in profile if you’d like to give it a try and possibly give early feedback (it’s still in early development).
Considering that a fleur-de-lis involves somewhat intricate curves, I think I'd be pretty happy with myself if I could get that task done in an hour.
Given a harness that allows the model to validate the result of its program visually, and given that the models are capable of using this harness to self-correct (which isn't yet consistently true), you're in a situation where, during that hour, you are free to do some other work.
A dishwasher might take 3 hours to do what a human could do in 30 minutes, but it's still very useful because the machine's labor is cheaper than human labor.
If you haven’t already, try going to Personalization settings, change tone to “Efficient”, and set Warm, Enthusiastic, and Emoji to “Less”. While not fundamentally solving the issue, I do prefer it over the baseline behavior, to the extent that I miss having a similar setting in Gemini.
I enjoyed reading it. Whether or not one believes the future will look like this fictional/hypothetical one, it encourages the reader to think about what would need to become true for this future to be plausible.
That has not been established in the courts, at least not precisely enough to assert that for sure this project isn’t copyrightable.
“But the decision does raise the question of how much human input is necessary to qualify the user of an AI system as the “author” of a generated work. While that question was not before the court, the court’s dicta suggests that some amount of human input into a generative AI tool could render the relevant human an author of the resulting output.”
“Thaler did not address how much human authorship is necessary to make a work generated using AI tools copyrightable. The impact of this unaddressed issue is worth underscoring.”
I wish Taiwan’s reactors were never shut down in the first place, and I hope Taiwan can hold out long enough to get them started back up again. It’s a step towards being able to withstand a blockade (Taiwan lacks oil, gas, and coal resources, so it relies on imports). If the PRC chose to attack a nuclear power plant, it might create the pressure needed for international intervention.
For what it’s worth, I’ve personally walked around the nuclear containment area on Orchid island and swam in the waters around it. It’s a well managed and nice place.
Hasn't Russia chosen to attack a nuclear power plant in their recent aggression? Unless you're thinking of a more destructive kind of attack, I probably shouldn't be counting on international intervention.
I don’t mean to suggest it alone would tip the scales. And I agree the hope for international intervention is dimmer than it ever has been. But it would be one thing on the scales, as it has been in Ukraine as well. While there has not been direct military intervention in Ukraine, the support that has been provided relies on political popularity, and Russia’s endangering of Zaporizhzhia has contributed to the disdain of and attention towards Russia’s invasion.
If the machine can decide how to train itself (adjust weights) when faced with a type of problem it hasn’t seen before, then I don’t think that would go against the spirit of general intelligence. I think that’s basically what humans do when they decide to get better at something: they figure out how to practice that task until they get better at it.
In-context learning is a very different problem from regular prediction. It is quite simple to fit a stationary solution to noisy data; that's just a matter of tuning some parameters with fairly even gradients. In-context learning implies you're essentially learning a mesa-optimizer for the class of problems you're facing, which in the case of transformers essentially means fitting something not that far from a differentiable Turing machine with no inductive biases.