LinkedIn Mobile Moved from Rails to Node: 27 Servers Cut and Up to 20x Faster




If you are thinking about using node.js for this reason[1] on most sites, you are optimizing poorly. LinkedIn didn't worry about this until after they were a public company.

If Python/Django or Ruby/Rails can get your app out the door and into customer hands faster, it is almost always the right thing to use.

1. There are certainly other, very valid, technical reasons for choosing node.js over other technologies early. But let those reasons be about the problems you are solving today, not the ones you might need to worry about when you have 50 servers to deal with.


I'm sure LinkedIn could have cut servers without switching to node. Take your first, crappy implementation and rewrite it in the same language and you'll probably still see at least 10x improvement, if not 20.


This times a thousand. When you first write the software you're basing it on expectations, no matter how well you plan, but when you rewrite it you're coming at it with real-world knowledge of the pain points, so of course it's going to be better in terms of performance.


"This times a thousand" would be an improvement by a factor of 10000. The improvement should definitely not be that large.


Thanks for the explanation, skeletonjelly. I think the reason I misunderstood is that I don't hang out on the Internet enough: a friend explained to me that "This." and "this times a thousand" are Internet slang that mean "I agree" (the second, presumably, means something like "On a scale from 1 to infinity my agreement level is 1000." :). As far as I know these expressions aren't used verbally, which would explain why I took it literally.


It means repeat it a thousand times for emphasis.


You're getting downvoted because the poster was referring to the gravity/importance of the message.


We don't all have English as our first language, and there's no problem with that. I'd have replied exactly as you did if I thought he literally meant "this times a thousand".


I included an extra bit of improvement to deal with inflation rates.


LOL, apparently sarcasm and witty banter are not HN's forte :P


I disagree. When you change state management to client side, you are making a fundamental architecture shift that is significant enough to remove a lot of server-side overhead. What makes you think refactoring code is going to give you a 10x improvement in efficiency? If your code is that bad, you should get rid of the developers along with the code.


>I disagree. When you change state management to client side, you are making a fundamental architecture shift that is significant enough to remove a lot of server-side overhead.

In most cases, you're not. You're mostly making a big logic spaghetti mess on both the client and the server AND making your pages load slower, especially for the initial load (client performance, JS loading times, etc).

>What makes you think refactoring code is going to give you a 10x improvement in efficiency?

Because even correctly implementing just the caching layer, with nested/micro-caching, can give you up to 1000x improvement in efficiency in the first place.
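
To illustrate (a rough sketch, not anyone's production code; the names and the db.loadProfile call are made up), even a one-second micro-cache in Node collapses a flood of identical queries into roughly one backend call per second:

    // Tiny TTL micro-cache: it stores the promise, so concurrent misses
    // for the same key share one in-flight backend call.
    const cache = new Map();

    function microCache(key, ttlMs, computeFn) {
      const hit = cache.get(key);
      if (hit && Date.now() - hit.at < ttlMs) return hit.promise;
      const entry = { at: Date.now(), promise: computeFn() };
      cache.set(key, entry);
      return entry.promise;
    }

    // 1000 identical req/s with a 1s TTL means ~1 backend query/s:
    // microCache('profile:42', 1000, () => db.loadProfile(42));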


It's true that moving more logic and state management to the client side often increases page load times - but it also (potentially) allows you to make certain interactions much faster.

When you have real data on the client instead of just a bunch of markup, you can be a lot smarter about how and when you make additional AJAX requests. Optimistic updates can make a huge difference in perceived performance.
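
A minimal sketch of an optimistic update (the endpoint and element ids are hypothetical, and modern fetch stands in for the XHR of the era):

    // Render the new state immediately, then reconcile with the server.
    async function likePost(postId) {
      const counter = document.getElementById('likes-' + postId);
      const previous = counter.textContent;
      counter.textContent = String(Number(previous) + 1); // optimistic

      try {
        const res = await fetch('/api/posts/' + postId + '/like', { method: 'POST' });
        if (!res.ok) throw new Error('HTTP ' + res.status);
      } catch (err) {
        counter.textContent = previous; // roll back on failure
      }
    }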


>> Take your first, crappy implementation and rewrite it in the same language and you'll probably still see at least 10x improvement.

You will probably get the same mess. There is plenty of literature about that. E.g. a complete rewrite is what killed Netscape.


As bad as Netscape 4 was, what killed the company was their decision to charge for the browser while their monopolistic competitor was giving it away for free.


When a competitor undercuts your pricing with free, not doing anything for three years is unlikely to be the optimal response...


Actually, the competitor deliberately did it in order to kill Netscape's business model. This affected Opera too.


I think this is taking the wrong lesson from Netscape. If you do a greenfield rewrite of a multi-million-line application you are probably going to have a bad time, but people successfully rewrite smaller applications or portions of an application all the time. Since this was just a portion of the total functionality of LinkedIn, they incurred much less risk than a ground-up rewrite would have.


Don't rewrite the whole thing at once! Cut off logical sections and fix them one at a time.


>You will probably get the same mess. There is plenty of literature about that. E.g. a complete rewrite is what killed Netscape.

That's a meme started by a Joel Spolsky article, and maybe an allusion to the "second system effect" (which is about something else altogether). Hardly "plenty of literature about that".

Actually, the complete rewrite might have killed Netscape, but it saved Mozilla and Firefox. And Netscape was a multi-million-line web browser engine, with a JavaScript interpreter, a full mail client and a WYSIWYG editor thrown in. And multi-platform in C/C++ to boot.

That is, something an order of magnitude more difficult than LinkedIn or 99% of web properties out there.

There have been TONS of successful rewrites. Especially in the web space, it's almost trivial to rewrite your webapp or parts of it. To name but a few:

Twitter, the new Digg, SoundCloud, Basecamp, etc etc.


It's got a slightly longer history than just being a Joel Spolsky meme: http://en.wikipedia.org/wiki/Second-system_effect


Noticed how I already wrote about that? To quote:

"That's a meme started by a Joel Spolsky article, and maybe an allusion to the "second system effect" (which is about something else altogether)."

That said, the "second system effect" is not about merely rewriting risks, but especially about architecture and design choices. From the very wikipedia article:

"""People who have designed something only once before, try to do all the things they "did not get to do last time," loading the project up with all the things they put off while making version one, even if most of them should be put off in version two as well."""

That is, if you design your rewrite _without_ wanting to build a bigger, more involved product, but merely a cleaner and better-made product, this does not apply.

Another quote from the very article: """The second-system effect refers to the tendency of small, elegant, and successful systems to have elephantine, feature-laden monstrosities as their successors."""

This is not the case we refer to here. The Netscape codebase by version 4 had become an ungodly mess (and it was one even before that), not a "small, elegant" system. And Mozilla/Firefox, the rewrite, is cleaner and more elegant than Netscape was.

Consider a 100 line Python script. People can rewrite it from scratch in 100 different ways, while improving upon it with no problem. At some point of complexity this stops being true, but Brooks was talking about huge projects, built by enormous teams, like OS/360 and such. Not some 20K - 100K line web project.


the complete rewrite might have killed Netscape, but it saved Mozilla and Firefox.

What do you mean?


I mean that while Netscape, the company, succumbed while waiting to put out their new competitive browser, we now have the Mozilla Foundation and Firefox.

We wouldn't have those if it wasn't for the rewrite. The old code was a mess even before version 4 (by its developers' own admission), and it could never get to the point of competing in the engine space ever again.

That is, with the old Netscape rendering engine it would not be possible to extend it to compete in the modern HTML5/Canvas/GPU acceleration/CSS3/add-ons/separate contexts for each tab/etc era.


And since we are talking here more about technology, and not business models, we should more or less regard the Netscape rewrite as a success story. They did NOT produce a system that was over-engineered or that failed to perform well. Indeed it took a tremendous amount of market share. Now of course they complain about the code again but the situation is far from desperate.


Is moving from Rails to Node not a complete rewrite?


It doesn't count as a rewrite when the rewrite is in a language that's cooler than the original version.


They just wrote a Ruby interpreter for Node.js. Node.js is just that fast.


Where can I see this?


You'll have to wait until April 1.


I'm with you on this one. I'm currently in the process of re-platforming and I'm noticing considerable gains just from revising the way certain processes are done. You find a lot of "wtf was I thinking".


My thoughts exactly -- going from 30 to 3 servers is no joke and it couldn't just be because of a move to node.js


Node has some pretty unique benefits. I save a good 1000ドル/month from switching from .NET on dedicateds to NodeJS on a PaaS (Heroku), and that was after 2-3 years of writing and rewriting and optimizing the .NET stuff.

The biggest improvements came from persistent connections to redis/mongodb and polling for updated information independently of requests, so there were no cache-or-fetch shenanigans at all in some areas.


>> " I save a good 1000ドル/month from switching from .NET on dedicateds to NodeJS on a PaaS (Heroku)"

I'm really puzzled by this. A single dedicated large .NET box (16GB RAM and 6 Core Processor, 1TB raid, etc.) runs 150ドル a month. When I was looking at PaaS, Heroku came in more than five times that much for similar capability. 1,000ドル a month gets me SIX of these dedicated boxes that can be tied together as needed.

Why was your dedicated setup so much more expensive that you could _save_ that much, much less why were you spending that much in the first place?


What were the benefits you saw in switching from .NET to Node? Was it in basic code structure/complexity or performance?

I'd actually be very surprised if NodeJS running on Heroku (which is built on EC2) performs better than compiled .NET code running on dedicated Windows hardware.


The major benefits were persistent connections and background fetching of data - a lot of my requests serve data directly from local memory instead of hitting local or shared caches and databases.

The equivalent in .NET I guess would be BackgroundWorkers that are independently prefetching the data required most of the time but I could never get them to Just Work.

Specifically for my use case 99+ percent of requests receive some data, do some light manipulations and then push the data to redis about 8,000 to 12,000 times a second. With .NET I could only push to locally running software instances because anything remote couldn't keep up with the connection volume (without throwing even more hardware at the problem).


Interesting. Thanks for the answer. I'm a little surprised that .NET can't juggle network connections very well but in retrospect, I probably shouldn't be.


I think it's just whatever .NET's doing to pool them that has some tiny bit of overhead that doesn't matter most of the time.


Interesting.. how often do you pull from the db's? Do you think about it as a write-through cache?


Every 30 or 60 seconds + the data's timestamped so it's usually a pretty minimal refresh.

It's really just like using memcached or the built-in .NET caching rather than a write-through one; new data reaches each dyno either via the periodic refresh or when it tries to create that record and finds it already exists. Writes are done immediately and the caches don't get updated, because there are 8-16 dynos and why bother updating only the single dyno that created it.

I did originally use redis pub/sub to push out updates to everything but I ended up removing it because it was unnecessary.

Here's some example code; it pre-fetches all the leaderboards (not scores) every 30 seconds: http://pastebin.com/asq6eExu

Higher up in the same script is the api for the leaderboard data with stuff like: http://pastebin.com/gsfvsZsv
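
The shape of that periodic-refresh pattern, as a minimal sketch (using today's node-redis v4 API rather than the 2012 one; the key name and data shape are hypothetical):

    const { createClient } = require('redis');

    const client = createClient();   // one persistent connection
    let leaderboards = {};           // served straight from local memory

    async function refresh() {
      // Upstream data is timestamped, so this is usually a small delta.
      const raw = await client.get('leaderboards:snapshot'); // hypothetical key
      if (raw) leaderboards = JSON.parse(raw);
    }

    async function main() {
      await client.connect();
      await refresh();
      setInterval(refresh, 30 * 1000); // refresh on a timer, not per request
    }
    main();

    // Request handlers read `leaderboards` synchronously: no I/O on the
    // hot path, which is where the gains described above come from.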


This article actually gives details of their first implementation which suggest that they could have cut servers without switching to node:

http://ikaisays.com/2012/10/04/clearing-up-some-things-about...


Yeah, as the article notes, the old version used HTML and the new version uses binary blobs.

This is hardly surprising.


To be fair, Node.js isn't exactly harder to work with than Django or Rails. I myself had to choose between these about a year ago and despite the asynchronous paradigm I still found it relatively easy to start using in comparison to Rails and Django.


Node.js is extremely hard to work with compared to Rails and moderately hard to work with compared to Django.

The Node.js ecosystem has a lot of potential, but the variety of off-the-shelf add-ons is severely limited compared to either of those more mature frameworks.

If you think Node.js is easy, you've never really experimented enough to understand what makes Rails so effortless. It's a lot easier to produce a production-ready application with Rails than it is with Node.js as it is today.

In four years, as Rails starts to add less critical features, Node.js may well have caught up.


I used to think this until I built an app with node... every time I got to a point where I wanted a library or framework, a mature, well-supported one existed. Yes I had a bit of a learning curve since it was my first node app, but I was shocked at how smoothly it went and this was over a year ago.


When you want a 100% solution and Node gives you a 95% one, that 5% can be a deal-breaker. Node is nearly there, which is frustrating because it has so much potential, but it's just not there yet.

I think in the long run Node will beat the pants off of Rails but it's going to take an enormous amount of work to make that happen.


What's the missing 5%? I'm personally partial to sinatra so I might be missing it there also :)


Sinatra is a lot closer to Node and Django than Rails in terms of philosophy. It's more minimalist, where you must assemble a lot of your environment rather than be issued one by default.

The missing 5% is mostly things that make your development process more effortless.

I've found that it's easy to get a first cut of an application out inside of two weeks with Rails, but you will probably need more time or lower expectations when working with something more limited like Django or Sinatra.

Since Rails imposes a lot of conventions, applications are easier to organize if you follow the rules. Sinatra is far more open to interpretation, so if you're not disciplined it can turn into a bit of a mess.

I'm a big fan of the DRY principle and it's much easier to apply within Rails than in other environments. A lot of this relates to how Ruby is a lot easier to meta-program than other languages.


Node.js is extremely hard to work with compared to Rails and moderately hard to work with compared to Django.

Pro tip: Don't put this on your resume.


Coming from Rails, Node is like opening a toolbox and finding it has a screwdriver, a wrench, and a hammer. Somehow you're expected to build things with that.

For those who love to build things from the ground up or to carve out new solutions, Node is a great place to be. I think it's got enormous potential and is the biggest opportunity since Python and Ruby took off years ago.

Just don't think that because you can create the same sorts of apps with Node, it's as easy.

Ruby on Rails even three years ago was laughable compared to today's toolset. Node is catching up quickly, covering ground faster than Ruby ever did, but still lagging.


made me chuckle


Why not write your code using Node.js because you actually enjoy using JavaScript? Or how about the convenience of writing in one language in your stack? Or how about the simplicity provided by not having to worry about threading per process?

If you already know Python or Ruby, then ya, using Node.js would be silly. But if you have to pick one because you're equally familiar with all of the languages, then Node.js isn't a bad choice at all.


You'll note I explicitly said there are reasons to choose it. And if you choose it for those reasons, great. Just don't choose it because it will solve your Maserati problem. Choose it for your problems today.


" because you actually enjoy using JavaScript? "

You sick bastard!


What I saw as a plus was that they combined front/back end teams into a single unit.


Am I the only person who doesn't believe this story? Because really, the whole reason we have front-end/back-end specialists is because front-end javascript ninjas can't learn RUBY??!?


It's believable to me. The whole reason we have front-end/back-end specialists isn't that front-end JS people can't learn Ruby; rather, it's the same reason why front-end/back-end is separated code-wise.

Depending on how they're using node they could very well keep this segregation. But some implementations with node have server/client sharing code. I know I've seen examples of this. I imagine that's how they're doing things.


I don't think it's about the language - it's about knowledge. A backend specialist can learn Javascript but may not know the nuances of working with the DOM, or optimizing for pagespeed, or the important front end libraries. Similarly, a front-end specialist may not have all the experience necessary to diagnose problems in a deployment. That's one reason I don't buy the "front end developers can now do backend development because of node.js" argument. Any sufficiently competent backend developer can start writing Python code in a day. What that developer doesn't get in a day is understanding deployments, popular Python libraries, the ecosystem, etc.


I'm not saying they've removed specialists. I believe they still exist. But in switching to node, because client/server share a language, they're now capable of sharing code as well, which I imagine they are doing, and which is why it's logical that they've merged teams. This doesn't mean front/back-end specialists go away; rather, they become more well-rounded and work together more.


After reading the article, the fact that they cut servers was just one of the advantages they got from using node.js


I would spend a little extra time upfront and save many, many expensive rewrites later on. Use C from the start; you won't need to rewrite, just scale with hardware.

This argument that you should use Ruby (or any language that makes programming easy but runs like treacle) to get something out the door a few months earlier is bullshit. I would rather release something a few months later that I didn't have to totally rewrite down the line 10 times.


Said with true engineering "vision".

The reason why "getting it out the door" is so important is because nobody behind the door generally has any idea about what people will really pay for. By making it fast (read: cheap) to make, if you find that you don't get it right, you can try again, and again, and again.

Now, you can argue that "they shouldn't be releasing without knowing they will be successful", but if they have the ability to see the future, they should just take that VC money and put it in the market. People like Steve Blank and Eric Ries have a lot of evidence for how running most startups (software startups particularly) with a "build it and they will come" attitude is rife with failure.

Now, if you are just building for fun, that's different. I knew a guy building a Dropbox clone in C. More power to him (though I wouldn't have touched it, because I think it is too hard to get security right all the time in a language like C).


I can turn around a project in a few months easily, and all my _running_ software will be C or C++. The secret is leverage and code generation (where I will use a scripting language).


This would only make sense if you're working for an established company that can burn a few extra months early on. For a startup the runway might be too short. In this case you have no idea if your product is even going to exist a year later so it seems a bit premature to be worrying about the possibility of rewrites.


My experience is that software always takes much longer than expected. If your startup will fail if you run a few months over, then you may as well go and place all your money on black at the casino. You need to get into a position where you can run a year over and still survive, i.e. have other income while working on your startup.


But the odds of the thing you release actually getting used are low. So make it fast, and if it's a hit, then deal with that.


Wow!! Your comment encapsulates the thinking of every wannabe startup loser that ends up smoking crack on some beach in San Francisco.

Will making it faster increase the chances of more people using it? Also, this attitude of building shit products (yes, software is a product) because... well, the chances are it will fail... is precisely why so many startups fail. This is why you get one crapola startup after another...

"Show HN: Crap.ly -- we built this in 3 weeks using Ruby, it will scale up to 20 users before craping out, its a twitter scraper that connects to app.net and diaspora showing how much Money your Kick-starter project has made and includes quotes from Paul Graham about how to build a successful startup, because he has built so many"

Imagine Linus had built Linux using fucking node.js


Imagine your shock when you build your craptastic app in C++ and wonder why it's not any faster than a scripted app. I'll give you a hint ahead of time: most bottlenecks aren't processing time, they're DB or IO limited.


Not strictly true. In the bigger companies, everyone knows that DBs (which is really just a special case of I/O) and I/O are slow (we tend not to hire them if they don't - one of my favoured systems & arch. interview questions[1] is on memory/storage hierarchies). The more interesting problem is what ELSE you've traded off to eliminate I/O on the common path - and in many languages it turns out to be memory fragmentation and/or garbage collection time. In Java, stop-the-world GC pauses of multiple seconds are not unheard of, and it's no fun being Oracle's test bed for undocumented GC features. It's also no fun when you find out you can't restart your multi-gigabyte heap JVM because the underlying OS has fragmented RAM so badly it can't alloc that heap in a single contiguous chunk.


This is something newbies repeat to justify their choice of a piss-poor implementation. Sure, with 10 users hitting your Rails app it might perform the same as Nginx with a custom C handler talking to a database written in C, but increase the load and soon you're handing Amazon thousands a month for EC2 nodes while I am still on a small Linode. The funny thing is, because of code gen and experience I can probably write my app faster than you can while dicking around with framework after framework.


App speed matters some. But not nearly as much as a lot of other things. And when speed is the biggest problem, your users can tell you and the fixes are pretty straightforward.

The hard things to solve require a lot of user-facing iteration. Basically, the faster you can try new things, the more likely you'll get product-market fit before you run out of money.

My feelings on Rails are decidedly mixed, but fast prototyping is one of the things they got right.


You're not yourself when you're hungry.


What makes you think you know the problem space enough to make a C implementation that you won't have to rewrite 10 times? Writing in C certainly doesn't stop any rewrites at least where I work.


This. I'm as hardcore a C programmer as they come and I still find it better to start with Python and reimplement in C when the application becomes familiar and finite. It's faster, easier and cheaper.

The basic notion is that every bit of code is better the second time it's written, and C development is just too slow for the first iteration.


I look forward to seeing C as the go-to web-development language.

Really? C for a problem that is largely string manipulation and database access?


Why are they moving from Ruby if it's just "string manipulation and database access"?

The problem is, folks, as soon as you get a large number of users every compromise you made by using a toy language or database is magnified 1000 times. Google didn't implement in some scripting language to get to the market a few months early.


An excerpt from Google history:

----------------------------

Some Rough Statistics (from August 29th, 1996)

BackRub is written in Java and Python and runs on several Sun Ultras and Intel Pentiums running Linux. The primary database is kept on a Sun Ultra II with 28GB of disk. Scott Hassan and Alan Steremberg have provided a great deal of very talented implementation help. Sergey Brin has also been very involved and deserves many thanks.

-Larry Page page@cs.stanford.edu

----------------------------

There's something to be said about rapid prototyping and evaluation.


And now the real heavy lifting for Google web search is C++, as everybody with a clue knows. What's your point?


The point is that most companies don't get to the point that the difference between C++ and Python matters. Worrying more about the business and less about the technology will be more likely to see you succeed than worrying more about the technology and less about the business.

I don't believe all companies can survive with a python or ruby solution. I do think that, as technologists, we worry too much about the "optimal" solution to technical problems when, in most cases, businesses are made or lost in people problems. People problems are really hard because there are few "right" answers. Instead, it is an optimization game, and optimization games require agility.

If you are that amazing at C or C++ that you can iterate amazingly quickly with them, then use them! That will give you a leg up later. I've been developing software for 15 years and have used everything from C and C++ to Java to Python to Objective-C, and I've seen a massive difference in my ability to iterate with each of them.

Optimize for what works best for you, but don't be surprised if you choose C++ and a competitor who doesn't care about the "perfect" solution runs circles around you in the market because they chose something different (even if they re-write in C++ in 10 years, after they've stolen all of your customers).


How about worrying about both equally? If you still don't get it, see Diaspora: great idea and lots of buzz, but the implementation was shit and it was DOA.


That's because Google's initial product was CPU bound. Most web products aren't. Most of Youtube was and is written in Python.


Google didn't implement in some scripting language to get to the market a few months early.

That's actually exactly what they did.


I really need to quit reading HN comments. This entire thread is enraging. Both sides. Especially yours.


It sounds like they went through a major rewrite of their backend and ended up architecting things to be much more performant than their previous system. I'm curious to find out what parts of the system they think contributed most to the performance increase. While this is interesting it is by no means an apples to apples comparison of Node and Rails as the headline suggests.


From the original article[1] (not the highscalability link spam):

>This led them to a model where the application is essentially piping all of its data through a single connection that is held open for as long as it is needed.

So it sounds like they moved from a request/response-driven architecture to a streaming architecture. They also moved away from Rails and adopted an event-driven approach.

It is a well-known fact that it is almost impossible to stream stuff with Rails currently, hence adopting an event-driven approach made sense. I can see how just these two factors alone would contribute hugely to performance.

[1]: http://arstechnica.com/information-technology/2012/10/a-behi...
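
For illustration only (a sketch of the streaming model, not LinkedIn's actual code), one held-open connection per client looks roughly like this in plain Node:

    const http = require('http');

    http.createServer((req, res) => {
      // One long-lived response per client; updates are pushed as they
      // happen instead of a fresh request/response cycle per update.
      res.writeHead(200, {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive'
      });

      const timer = setInterval(() => {
        res.write('data: ' + JSON.stringify({ ts: Date.now() }) + '\n\n');
      }, 1000);

      req.on('close', () => clearInterval(timer)); // client disconnected
    }).listen(3000);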


Sure, but is it unreasonable to wonder why they need such an architecture? LinkedIn is basically a CRUD app, and while they have a wall now and yadda yadda, I wonder how much of this rearch was really necessary, over simple refactorings and sysadmin attention to the nuts and bolts.


The ability to stream stuff from an event-driven reactor loop actually makes a lot of sense when it comes to raw performance.

If you throw a Postgres database into the mix, which supports asynchronous queries, one can pretty much beat 20 Passenger instances serving similar requests. The problem, though, is that doing non-blocking IO does not reduce database load. So it's likely they re-architected that bit as well.
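
A sketch of that point with node-postgres (the `pg` package; the table names are made up): the event loop keeps many queries in flight at once, where a blocking worker handles one at a time.

    const { Pool } = require('pg');
    const pool = new Pool(); // connection details via PG* env vars (assumed)

    async function handleRequest(userId) {
      // All three queries go out immediately and complete concurrently;
      // the process is free to serve other requests while they run.
      const [profile, connections, updates] = await Promise.all([
        pool.query('SELECT * FROM profiles WHERE id = $1', [userId]),
        pool.query('SELECT * FROM connections WHERE user_id = $1', [userId]),
        pool.query('SELECT * FROM updates WHERE user_id = $1', [userId])
      ]);
      return {
        profile: profile.rows[0],
        connections: connections.rows,
        updates: updates.rows
      };
    }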


What is the sense that it makes?


Indeed, shifting to a streaming architecture can vastly reduce the amount of churn your application sees. This makes great sense if it fits the use case of your product. However, one need not throw out a huge existing code/knowledge base in order to accomplish this shift. I think that it's perfectly acceptable to shift to something like EventMachine + some HTTP layers to implement a streaming server using a good chunk of your existing codebase and developer experience. Node.js is the new hotness and so is the prospect of rewriting your application cleaner, faster and better... but often, the less you have to rewrite or learn, the more likely it is that your project will be successful.


If I had a choice I would have certainly picked EM. But I am not sure how much Rails code can be ported over to the event-driven model of EM. I think the only thing which can be reliably ported is the models.

Now, they apparently did evaluate EM, and according to them node.js performed 20 times better than EM/Twisted in certain tests, hence they went ahead with node.js. I am as curious as anyone else to see what those tests were.


The em-synchrony project (https://github.com/igrigorik/em-synchrony) has patched versions of the mysql gem, and they say they can run most of the Rails stack. They use a fiber model to un-event the program flow a bit. Ilya is brilliant, really. Still, I would be a little leery of betting LinkedIn on a bunch of monkey-patched client code. The challenge in porting to EM is that so many Ruby network clients will block. With Node.js that is impossible. I think for a rewrite it wouldn't matter, though: you'd use the right libraries and design your app around the framework, just like they had to do switching to Node.


If you <3 EventMachine, check out Celluloid. It's a vastly more sensible approach to asynchronous code in ruby imo - uses the Actor Model (from erlang OTP, etc) to great effect.


The aggressive caching approach taken by Basecamp Next is another example of such an optimization: http://37signals.com/svn/posts/3112-how-basecamp-next-got-to...


DHH shows off some of the new Basecamp caching logic in this video:

http://www.youtube.com/watch?v=FkLVl3gpJP4#t=33m30s


The title is misleading. From original article[1]:

> They found that Node.js, which is based on Google’s V8 JavaScript engine, offered substantially better performance and lower memory overhead than the other options being considered. Prasad said that Node.js "blew away" the performance of the alternatives, running as much as 20 times faster in some scenarios.

So according to the original article, Node.js did not perform 20 times better compared to the existing Rails-based backend. According to Prasad, it performed 20 times better than alternatives to Node such as EventMachine and Python Twisted (they did evaluate both).

Now I am having a hard time believing node.js can outperform EventMachine or Twisted by 20 times. Most benchmarks I have seen and done tell me node is marginally ahead. I would obviously like to see what they benchmarked, and how.

1: http://arstechnica.com/information-technology/2012/10/a-behi...


I think they created a specific benchmark for their use case. Maybe they even implemented part of their requirements on both stacks.


They should really release those benchmarks. From what I have seen Python/Gevent performance is on par with Node.js and I doubt Ruby/Eventmachine is that far behind either.

If they found a 20x performance difference there is probably a bug in one of those competing libraries.


I was on the team at LinkedIn when we first wrote the thing on Ruby on Rails. Here's my writeup containing some more context:

http://ikaisays.com/2012/10/04/clearing-up-some-things-about...

While I'll freely admit v8 is much faster than MRI Ruby, the efficiency gains are likely more related to 1) the rewrite factor 2) moving to non-blocking 3) the fact that the original server ... um, needed love


I personally find it hard to believe that all of LinkedIn was only running on 30 servers, and is now running on only 3.

EDIT: Mobile only. Maybe the title should be updated to reflect that.


> I personally find it hard to believe that all of LinkedIn was only running on 30 servers, and is now running on only 3.

I can believe it, if their caching strategy was garbage before and part of a massive rewrite was rethinking that. If you fail to use your caches, you'll pay for it.

I mean it's not like LinkedIn is about split-second changes, you could probably statically generate most of it and serve it with a single nginx instance.


Actually, we aggressively cached, which led to weird things happening. I posted a follow up about this explaining more context - I hope it gets voted up.


The performance improvements have probably NOTHING to do with node.js but with the re-architecture goals set by the team:

"For our inevitable rearchitecting and rewrite, we want to cache content aggressively, store templates client-side (with the ability to invalidate and update them from the server) and keep all state purely client side."

Better understanding of the problem and experience running the system were probably key to building the new high-performance architecture. Obviously, the old one lacked these big advantages.


This reads like: "Hey, our previous backend was a total turd, technically speaking." It might be trivial to speed up your own crappy first implementation 20x with some extra TLC.


Like a lot of contemporary online companies, they might be skimping on sysadmins.


yeah, but it's easier politically to rewrite it in a newer language. If you rewrite it in the same language, then you have to explain your rewrite as cleaning up someone's braindead mistakes-- someone who might still work there.

If you rewrite it in a different language, then you get to blame the old language and framework for all the problems. No harm, no foul-- and you get to put the new shiny thing on your resume.


I looked into Node.js, Sinatra, and Go to handle API traffic for a mobile app a few months ago and did a lot of benchmarking. What I found during my tests was that Go > (Node.js = Sinatra).

If I had wanted to add Rails to this comparison I would have compared apples to apples and used Metal instead of including the entire stack.


When you make a software change that allows you to reduce your server pool from 30 to 3, you don't say, "27 servers cut". You say "Servers reduced by 90%." I have no idea what "27 servers" actually means: you could have been using a thousand servers for all I know.


The original article:

http://arstechnica.com/information-technology/2012/10/a-behi...

And this is really interesting:

Finally LinkedIn tested, and ultimately chose to adopt, something surprising. The company embedded an extremely lightweight HTTP server in the application itself. The HTTP server exposes native functionality through a very simple REST API that can be consumed in the embedded HTML controls through standard JavaScript HTTP requests. Among other things, this server also provides access to the user’s contacts, calendar, and other underlying platform functionality.


It's a nice stat to see but I think this sort of comparison with "we moved our infrastructure of undisclosed age and unknown bloat to this new infrastructure built for the current problem domain" doesn't really do much for the ongoing conversation.

The article is touted as praise for a stack, but my gut says that it's really a smart restructuring of how they serve mobile. Either way, good on them for the efficiency boost.


What this article talks about is more than a year old. When I was looking for a tool to start my project I was at the same distance from Python, Ruby and NodeJS and their ecosystems. So when I read about it a year ago I leaned towards NodeJS. Knowing the experience of others helps certain people at a certain point of their yet-to-unfold story, but not everybody all the time.

I am not unhappy with my choice but I do not have enough data to compare with other tools. I do not think a lot of people have either. Once you start with a tool you tend to keep it since you invested a lot of time learning it as well as developing something with it. I think few can afford switching tools (e.g. FB switched from HTML5 to native recently for their mobile interface).


It's almost impossible to reach 1,000 req/sec in Ruby/Rails. It's relatively easy to reach 50,000 req/sec in C++. Assuming V8 is about 3x slower than C++, that puts node.js around 50,000/3 ≈ 16,000 req/sec, so it is not surprising that a move from Ruby/Rails to node.js gets a 10x throughput improvement.


"Programmers could leverage their JavaScript skills."

I'm confused by that being an advantage - I guess that's better than using some language you don't know at all, but it's still a bit of a different approach to JS, no?


"simplicity is at the heart of LinkedIn’s mobile vision." Then the article goes on to describe complexities of the implementation. Maybe my old and cranky mind has lost touch with what people mean by "simple".


Ah, odd math point, but was I the only one who noticed this sentence:

"focus on simplicity, ease of use, and reliability; using a room metaphor; 30% native, 80% HTML; embedded lightweight HTTP server; "


It's mangled from the original article.


What would they gain if they moved from node to nginx and a C++-with-Asio backend (e.g. http://cgi.sourceforge.net/)?


Hmm...

Does going from 30 servers to 3 actually make an impact to a 13ドルB company?


Without citing versions, it's hard to extract anything useful from this article. Rails 2.1 on ree-1.8.7 performs very differently than Rails 3.2 on 1.9.3-p194.


Again with that little war of frameworks? Man there's a lot of money and pride behind all this, dear god.


Is 27 servers a lot?


Can someone update the title to read: "LinkedIn Moved from Server to Client: 27 Servers Cut and Up to 20x Faster"?


But that would be wrong; the server legitimately moved from Rails to Node. Just because Node is JavaScript doesn't mean that it automatically runs on the client; JavaScript is a perfectly viable server language too.

The title could be improved by reflecting that this is for LinkedIn mobile, but the Rails->Node part is correct.


Not wrong; a lot of their performance gains were a result of caching their templates client-side and having their servers only return data. That is a much smaller load on the server and easily could account for the reduction in servers. The title makes it sound as if they reduced their servers and increased their performance simply by migrating from Ruby to Node.


This. Many rails apps spend much of their time rendering views, and even a decent caching strategy on the server can increase performance tons.


Read all the comments: not a single mention of PHP. wtf, world?


Ruby is one of the slowest languages you can think of, while Javascript on V8 is only 2.3X slower than C++ (median).

http://shootout.alioth.debian.org/u32/which-programs-are-fas...

What really surprises me is that they got such a huge gain. Most projects I've worked on are DB- or I/O-bound. Maybe they store everything in RAM.


They probably fixed query design issues at the same time as they moved implementations. It strikes me as hilariously unlikely that LinkedIn was CPU-bound.


It was a huge gain because the original Rails servers were running on single threaded Mongrel and blocking on cross data center IO.


Replacing inefficient and bloated Rails with a custom-coded framework, and byte-code-interpreted Ruby with native-code-generating V8, while losing an order of magnitude in readability? Well, nothing to see here.


I hate Rails. Node is the way to go. Node > Python > Rails


Library > Language > Framework?


What I mean to say is: in terms of modern web development, JavaScript > Python > Ruby and Node > Django > Rails.


Having an opinion is fine, but you need to support it with an explanation and/or data.


One server running Nginx with modules + PostgreSQL == 100 servers running Ruby/Node/some shit-slow scripting language talking to some NoSQL crap.



