Hacker News | oldgeezr's comments

Just curious, is there a good site/book/etc to learn how the modern internet actually works? As a lowly programmer, I have a good understanding of network communications, and some knowledge of things like routing protocols, but I'm completely lost when it comes to understanding how the modern internet actually functions. Thanks!


I'd normally recommend books like Google's SRE one, but at least in this case it glosses over the detail of where GFEs tend to live:

https://landing.google.com/sre/book/chapters/production-envi...

It used to be the case that they were mostly in POPs, but I think that with Maglev (https://research.google.com/pubs/pub44824.html) they can live in core clusters, too. Other Google sources go into more detail, e.g.

https://medium.com/@duhroach/profiling-gcps-load-balancers-9...

https://www.slideshare.net/MichelleHolley1/google-cloud-netw...

Back to your question, I'm not sure there is one good place to look up these things, but presentations/papers by companies like Google and Facebook are probably still your best bet. Stuff coming straight out of GCP teams will be a little more enthusiastic in tone, but that's easy to tune out. :-)

Another good example is Facebook's Ben Maurer and his Fail at Scale talk, which discusses a lot of details that are necessary for modern internet services, such as queuing, session/application-layer congestion control, canarying, advanced monitoring, etc. https://queue.acm.org/detail.cfm?id=2839461


Tubes by Andrew Blum does a great job of introducing how the infrastructure is laid out and, to a certain degree, how it operates: https://www.amazon.com/Tubes-Journey-Internet-Andrew-Blum/dp...

That said, I would love some more in-depth books on the topic.


A friend of mine is playing with neural networks and training them to play reversi. He's working with a lot of matrices so he tried AVX extensions and CUDA. AVX runs circles around CUDA, probably because of the setup time of moving things back and forth to the GPU. Also, CUDA can be a big pain in the ass to get working.


There are many answers to your question but for starters:

- There is no perfect security. There is a notion of raising the expense of piracy to a level that it effectively does not matter.

- IIRC, for instance, rooted Android loses support for... Widevine? So you can't really use Netflix on a rooted device where you could easily steal frames from the video buffer. Yeah, you can rig up a nice camera system and record analog off the display. Nothing they can do about that. They also may insert watermarks to let them know who recorded it.


Oh gee I guess I'm a wizard.

Lots of systems/embedded programmers roll their eyes at this kind of talk. Threads aren't really that hard.

Event queues do have benefits in certain situations. They pair nicely with state machines. You can easily end up in callback hell though, and it is often difficult to integrate some long-running, atomic tasks into your event loop. You end up doing things like having a thread pool, at which point you have to wonder why you stopped using threads in the first place. Oftentimes a threaded approach is a cleaner approach. Just get the locking granularity right - it's not that difficult.
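To make the "event queues pair nicely with state machines" point concrete, here's a toy sketch (all names and the connection-lifecycle example are mine, purely illustrative): events are drained from a queue and fed through a transition table.

```python
# A minimal event-queue-driven state machine: a connection that moves
# through CONNECTING -> CONNECTED -> CLOSED as events are dequeued.
from collections import deque

# (state, event) -> next state; unknown pairs leave the state unchanged
TRANSITIONS = {
    ("CONNECTING", "connect_ok"): "CONNECTED",
    ("CONNECTING", "error"): "CLOSED",
    ("CONNECTED", "close"): "CLOSED",
}

def run(events):
    state = "CONNECTING"
    queue = deque(events)
    while queue:
        event = queue.popleft()
        state = TRANSITIONS.get((state, event), state)
    return state
```

The appeal is that all behavior lives in one table you can read at a glance; the callback-hell problem starts when handlers themselves need to enqueue work or block.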


Systems/embedded programmers roll their eyes at this kind of talk because they usually control (or at least have visibility into) all of the code that goes into their stack. Threads aren't that hard under these conditions.

The main problem with threads is that they're non-composable: the set of locks that a thread holds is basically an implicit dynamically-scoped global variable that can affect the correctness of the program. If you call into an opaque third-party library, you have no idea what locks it may take. If it then invokes a callback into your own code, and you then call back into the library, there is a good chance that your callback will block on some lock that a framework thread holds, that framework thread will block on a lock you hold, and then the code that releases that lock will never execute. Deadlock.

If you control all of the code in your project, this does not affect you: define an order in which locks must be acquired and released and stick to it. If all of your dependencies have no shared data and never acquire locks themselves, this does not affect you (and indeed, this is recommended best practice for reusable libraries). If you never call back into third-party libraries from callbacks, this does not affect you, but it severely limits the set of programs you can write. If all of your dependencies thoroughly document the locks they take and in which order, this affects you but you can at least work around the problem areas and avoid surprise deadlocks.
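The "define an order in which locks must be acquired" discipline can be sketched like this (the ranking scheme and helper names are mine, one of several ways to impose a total order):

```python
# Lock-ordering discipline: every code path that needs multiple locks
# acquires them in one global order, so circular waits cannot form.
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

# A fixed total order over all locks in the program (manually assigned ranks).
LOCK_ORDER = {id(lock_a): 0, id(lock_b): 1}

def acquire_in_order(*locks):
    """Acquire the given locks sorted by their global rank."""
    ordered = sorted(locks, key=lambda l: LOCK_ORDER[id(l)])
    for l in ordered:
        l.acquire()
    return ordered

def release_all(held):
    for l in reversed(held):
        l.release()

counter = 0

def worker(locks_requested):
    global counter
    for _ in range(1000):
        held = acquire_in_order(*locks_requested)
        counter += 1
        release_all(held)

# Two threads that *request* the locks in opposite orders -- the classic
# deadlock setup -- but the discipline reorders the acquisitions.
t1 = threading.Thread(target=worker, args=((lock_a, lock_b),))
t2 = threading.Thread(target=worker, args=((lock_b, lock_a),))
t1.start(); t2.start()
t1.join(); t2.join()
```

Note this only works because both call sites go through the same helper; an opaque third-party library that takes its own locks is exactly the case where you can't enforce this.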

Most application developers do not work under conditions where any of these are true, let alone all of them. Application development today largely consists of cobbling together third-party libraries and frameworks, many of which are undocumented, many of which are thread-unsafe, and many of which spawn their own threads and invoke callbacks on an arbitrary thread.


> the set of locks that a thread holds is basically an implicit dynamically-scoped global variable that can affect the correctness of the program

One technique to get a handle on this situation is making the mutexes actual explicit global variables.

"But global variables are bad" they will say. Yes, and making the mutexes explicit globals simply reflects the reality: they already act as globals.

"But I need a separate mutex for each object instance like they recommended in 1995 https://docs.oracle.com/javase/tutorial/essential/concurrenc... " they will say. Have fun with that.

Python (with the GIL) and early Linux kernels (with the Big Kernel Lock) use a single global mutex for access to all shared mutable state. In my experience, this is an entirely reasonable design decision for a huge majority of applications.
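A sketch of that "one big lock" design (names are mine; this is the pattern, not anyone's production code): a single explicit global mutex guards all shared mutable state.

```python
# One explicit, visible global mutex guarding all shared mutable state --
# the "big lock" design (cf. CPython's GIL, Linux's old Big Kernel Lock).
import threading

STATE_LOCK = threading.Lock()  # a deliberate, explicit global
shared = {"count": 0}

def bump(n):
    for _ in range(n):
        with STATE_LOCK:       # every access to `shared` goes through here
            shared["count"] += 1

threads = [threading.Thread(target=bump, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

You give up parallelism on the shared state, but deadlock is impossible with a single lock and the locking protocol fits in one sentence.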


Well, you don't let lock semantics leak outside of a library's boundaries. That means you do two things: first, you organize the threads globally (no OOP-like patterns); second, you export threads to the outside world in a hierarchy that exactly reflects the code hierarchy (easiest if you export a single thread).

There are some patterns that are safe as long as you implement them correctly. The patterns that are good for IO are among the simplest, so that's where the GP was coming from. But it's viable not because he has full control of the code; it's viable because his problem domain has good options.


I agree with both of you. I don't think threads are THAT hard to work with. It definitely takes some experience to do it well and quite a bit more documentation to maintain the expected invariants. When libraries do get into a tangle, it's usually in-house code that's better ripped out. Easier said than done, I know.

Open libraries tend to either be single-threaded, and should be used as such, or explicitly thread-safe.

Disclaimer: I've used threads in Java, not much in C. Love me some JSR-133 volatiles. Still confused by the Java 9 memory model updates.


Quite. I'm so fed up with the "threads are bad" argument (in my mind it's been commonplace since about 2008, so it's interesting to see this piece from 1995).

I've made use of threads at some point in almost every single job of any duration. They're one of many problem solving tools and if you understand them, which isn't particularly difficult, at some point you're bound to run into a problem that's a natural fit for a multi-threaded solution.

Nowadays, especially with no shared state, they're super-easy to use on many platforms. Take, for example, the parallel support in the .NET framework, along with functionality that supports debugging multi-threaded apps in Visual Studio like the ability to freeze threads.

If you do need to share state, which is when locking becomes essential, most languages and platforms have easy to use constructs to help you do this without much in the way of drama.

I'm not suggesting for a minute that there are no dangers, but there are plenty of dangers with other programming techniques, as well as lurking in any system of sufficient complexity, so I don't really understand why threads garner so much hate.


> which is when locking becomes essential, most languages and platforms have easy to use constructs to help you do this without much in the way of drama.

This is actually a problem. It is very easy to just slap locks everywhere, which, depending on your workload, can leave threads blocked on the locks instead of doing work.

I have seen many designs that used threads "for performance", but had so many locks in place that a single thread would actually perform similarly, with much less code complexity.

Once you get past a couple of locks in your code, it starts to smell.

Just because you can do Thread.New in your favorite language, doesn't mean you are using them correctly or efficiently.


It reminds me of a critique of threads in The Art of Unix Programming (available at http://www.catb.org/esr/writings/taoup/html/ch07s03.html#id2...). And now that I look it up, it actually cites the Ousterhout paper! This suspicion of threading was one of the few parts of that book I found unconvincing, personally, but it's another witness that they have worried some people for a long time.


A lot of work in programming languages over the past decade has been devoted to providing a safety net and guard rails for avoiding the pitfalls of thread-based concurrency. See in particular Rust and Go. It's still quite possible to corrupt data and get deadlocks, but our languages have come a long way to making it harder.

But the point of this article is to say if we ditch the notion of threads entirely and go with this other thing, we won't need safety nets anymore because it will be impossible to deadlock and corrupt data (as opposed to less likely).


I love go and goroutines, but besides the ability to select() over channels I wouldn't say go has done much to help get concurrency _right_. Mostly just easier. Even Java has a few more tools for healthy concurrency.

I don't blame go because I'm not convinced threads are all that bad, but having more concurrent data structures would be great.


Structured threads aren't that hard (e.g. task-based systems, thread pools).

Unmaintainable raw-pthread messes are a nightmare sequel from the director of Endless GOTOs.


Yes, small careful software teams can make threads work. However, if you start to work with physicists, mathematicians, electrical engineers, and so on who are incredibly smart in their own areas, but who don't have or even value a skill in software, you'll discover they make a real mess out of threaded programs in a way that doesn't happen with separate single-threaded processes.


If that's your audience, then you should give them a library/framework/language that hides all the complexity. I'm currently spending a lot of time working on Python Tornado stuff on an embedded device, and I can say that the lack of threads does not substantially reduce the number of ways you can screw things up.


They aren't my audience... they're my coworkers. Sometimes I get to pick how we do a project and sometimes I'm there to help them with their project. If they chose to use threads, I generally try to escape quickly.

No experience or comment on Python/Tornado. We don't really do a lot of web stuff.

But sure, people can screw things up in lots of ways. However, once a threaded program is screwed up, you really only fix it by starting a new version - it's near impossible to incrementally fix race conditions and deadlocks - you can't reliably repeat the bug to debug it. Bugs in non-threaded code can at least be tracked down one by one.


There's even a course from GA Tech (Intro to Operating Systems, publicly available on Udacity) that covers how to use threads safely and sanely. I went in knowing nothing but terror from a failed experiment in naive multithreading and came out wanting to apply threads to everything. Maybe not quite the right approach, but I at least feel vastly more confident with keeping them manageable. Like you say, managing how and when to lock is the key.


> covers how to use threads safely and sanely.

It takes a lot less expertise to make events as fast as threads as it takes to make threads as safe as events. I don't know about the rest of you, but I personally do not have a brain that can become an expert on every topic.


If your state machine event handlers are non-blocking, then the thread pool is the same size as the number of available hyperthreads. That's not hard either. And, as observed elsewhere, it becomes impossible to screw up. That's a powerful property, and makes it possible for non-embedded 'normal' folks to write correct code in this space.
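A sketch of that sizing rule (the handler and workload here are mine, purely illustrative): with non-blocking handlers, a pool of one worker per available hardware thread saturates the machine without oversubscription.

```python
# Thread pool sized to the number of available hardware threads, for
# handlers that never block (pure computation, no I/O, no locks).
import os
from concurrent.futures import ThreadPoolExecutor

N_WORKERS = os.cpu_count() or 1  # available hyperthreads (fallback to 1)

def handle(event):
    # A non-blocking event handler: does its work and returns.
    return event * event

with ThreadPoolExecutor(max_workers=N_WORKERS) as pool:
    results = list(pool.map(handle, range(8)))
```

The moment a handler can block (on I/O or a lock), this sizing stops being sufficient, which is exactly why "non-blocking" is the load-bearing assumption above.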


I have permanent visual disturbances and suffered long-term panic attacks and anxiety as a result of psychedelic use. Things got better over time, but not completely back to normal. I'm not advocating for the continuation of our current legal strategy, but I feel like these drugs need to come with strong cultural wisdom about their appropriate use and potential for abuse.

Anyone who's been in a psychedelic community for long (stoners, deadheads, etc.) knows at least a few people who have been temporarily (1 day to 1 year) or permanently fried from having experienced one or many trips. I feel like there is often a backlash against our regressive legal approach which tends to accentuate the positives and diminish the negatives, but really we need information. These drugs can permanently change your personality in some ways that will generally not be seen as positive. Yes, they can have many good effects, but that comes with a small risk of extreme side effects.


Are these people "fried" because they used those substances, or because of how they used them, or how they viewed, integrated (or not-integrated) their experiences?

A visual distortion is a good example. When experiencing a visual distortion, it is possible to feel anxious about the appearance of such a distortion, or instead one could be indifferent to it, or even enjoy and welcome it. It need not be negative.

More research is certainly needed on the negative experiences people sometimes have with psychedelics (so far, most of the research has focused on the positives), but I have a strong feeling that how people approach their experiences, what they expect to happen, and how they interpret what happens plays a very critical role on whether the experience is positive or negative for them. This is the old "set" (or "mindset") part of "set and setting" that's critically important for their constructive use.

Also, it may be possible to work through any anxiety or negative effects one experiences with a therapist. Panic attacks and anxiety, for instance, are things that therapists tend to be actually really good at treating.


Visual disturbances affected my ability to read books for years. I am no longer a speed reader thanks to psychedelics. For years after my first big trip, walls breathed and floor tiles became 3d. It is hard to be indifferent to such things.

My set going into the trip was that I was expecting to have a great time with friends. I was young and very positive towards psychedelics. I did not have a 'bad trip'. The effects came later.


Depends on the distortion. A visual distortion means that you aren't seeing things as they really are. That could be problematic for your ability to drive, or to ride a bike, or any number of other activities. You could be indifferent to that, but it could still be a genuine problem for your ability to function.


Even without visual distortions, none of us ever see things as they are :)

As a fun fact, you can drive on LSD if you’re experienced with psychedelics and know you can trust yourself, at least under doses <150ug. Absolutely not recommending it to anyone, but the visuals don’t really get in the way. (I’ve only driven on acid once, when I witnessed a really, really bad skateboarding accident and had to rush a stranger to the ER).

IME most people handle psychs fine, some can handle practically anything, and some will lose connection to reality and basically forget how rational thought works on an eighth of mushrooms. There are little indicators that have given me an idea of how someone might react, but you never know for sure. That's why the advice to have a trip sitter is always given - personally I've never needed a trip sitter, but I've seen people who absolutely did.


"you can drive on LSD if you’re experienced with psychedelics and know you can trust yourself, at least under doses <150ug. ... Absolutely not recommending it to anyone, but the visuals don’t really get in the way."

It's not just the visuals that are the problem. Temporal distortions are common -- you can feel like time has stopped or slowed down or sped up or gone backwards. Also, you might get disoriented or confused -- not understanding where you are or what you're doing or how a steering wheel works while you're on the freeway is probably not the wisest or safest thing to subject yourself or others on the road to. Or you might get irresistibly entranced by a fleck of paint on your dashboard, which could look like a whole animated world to you, while you should be keeping your eyes on the road, etc, etc, etc.

You know how there's a warning not to drive or operate heavy machinery on some medications? Well, that warning should be on psychedelics, only times 1000.


Yes, I experienced being 'fried' for a couple of years after excessive MDMA and LSD use. My experience mirrors yours.


MDMA is neurotoxic though. It's strange if only hallucinogens triggered problems in the other poster. LSD is not as far as I read.


Out of curiosity do you have colorblindness?


Nope (sorry, I'm OP but using my home account). I'm nearsighted. I was eventually diagnosed as slightly bipolar (agitated depressed), but that was after my experience with psychedelics. FWIW, I believe the life-changing experience was caused by something other than LSD. A friend got tie-dye blotter from the Grateful Dead parking lot around '90. I took two tabs. It lasted around 21 hours, which is odd for LSD. Definite LSD-like effects (time dilation, huge trails, tunnel vision in the early stages...). It was blotter though, so it limits the number of RCs it could have been.

Also, I ended up with essential tremor, tics, and restless leg syndrome. That runs in my family though. I probably have mutations in my dopamine system.


> two years ago

Ok, so let us know how it goes in 20 years.


I had LASEK 13 years ago and it's still going smoothly. I'm now 52, so I have presbyopia but, besides that, I still have almost perfect vision. The only drawback is that, as others also said, the pain immediately after surgery was horrible.


I can only speak from my experience with Amazon. I sent a resume in for a job at Lab126 since I'm an embedded guy with an EE background, no CS except 21 years doing embedded software. I never heard from Lab126, but the AWS team got back to me. After failing to pass all the test vectors of the coding test, they gave me a phone interview. I did OK at best. They flew me up for an interview and proceeded to ask me CS algorithm questions on a whiteboard for 6 hours. I didn't get the job (and probably would have turned it down anyway, since it involved being on-call), but it surprised me that their screening process was so bad. Nowhere on my resume did I indicate having any knowledge of CS algorithms, yet it was seemingly vital for the role. Strangely enough, I did study up on virtualization implementations (KVM/Xen), but no one asked me a single question about that (despite it being listed as vital on the job description).


Working on new projects every couple of weeks would be fun.

Learning a new set of APIs, version control strategies, team members, platform peculiarities, build systems, bug reporting and issue tracking software, groupware... fuck that. I could sort of do it when I was in my early 30s but not now.


It's an interesting way of looking things. Depression is as complex as the brain which produces it, however. This may be a part of it, but there are many pieces in that puzzle.


> They’re particularly ideal for embedded systems.

While I agree, I think this promotes the idea that they're only good for embedded systems, especially small ones. State machines are not always the right tool for the job... only most of the time :)

State machines (or DFAs, for you CS folks) are used all over. For instance, AFAIK, grep is generally implemented with a state machine, albeit one which is dynamically constructed at run time (which is a pretty cool concept if you ask me).
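A toy illustration of that runtime-construction idea (function names are mine, and this sketch handles only literal patterns; real grep compiles full regexes, e.g. via an NFA-to-DFA construction): compile a pattern into a transition table once, then scan the input one character at a time with no backtracking.

```python
# Build a substring-matching DFA at runtime (KMP-style): state i means
# "the last i characters seen match the first i characters of the pattern".
def build_dfa(pattern):
    assert pattern, "non-empty literal patterns only in this sketch"
    alphabet = set(pattern)
    dfa = [dict() for _ in range(len(pattern) + 1)]
    for state in range(len(pattern) + 1):
        for ch in alphabet:
            # Longest prefix of `pattern` that is a suffix of what we've
            # matched so far plus `ch`.
            s = (pattern[:state] + ch)[-len(pattern):]
            while s and not pattern.startswith(s):
                s = s[1:]
            dfa[state][ch] = len(s)
    return dfa

def matches(dfa, accept_state, text):
    state = 0
    for ch in text:
        state = dfa[state].get(ch, 0)  # chars outside the pattern reset to 0
        if state == accept_state:
            return True
    return False
```

Usage: `matches(build_dfa("aba"), 3, "xxabay")` finds the substring in a single linear pass, which is the whole point of building the table up front.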


True! Statecharts are handy in lots of domains. Embedded happens to be a prime target due to the memory constraints combined with often limited behavioral requirements.

IMHO, most websites / UIs implement half of a poorly understood state machine (to roughly paraphrase the Common Lisp quip). Qt also has an interesting statechart engine: http://blog.qt.io/blog/2017/01/23/qt-scxml-state-chart-suppo...

