snearch + scaling_website   156

6 Rules of thumb to build blazing fast web applications (server side) | Loige

Horizontally scalable web applications by Inviqa
Horizontally Scaling PHP Applications: A Practical Overview by Digital Ocean
Best Practices For Horizontal Application Scaling by OpenShift
Scalable Web Architecture and Distributed Systems by Kate Matsudaira
Intuitively Showing How To Scale A Web Application Using A Coffee Shop As An Example by HighScalability
Book: The Art of Scalability by Martin Abbott and Michael Fisher
Slides 7 Stages of scaling web applications by Rackspace
Webdevelopment  scaling_website 
september 2015 by snearch
Haskell for all: Scalable program architectures
Category theory is full of generalized patterns like these, all of which try to preserve that basic intuition we had for addition. We convert more than one thing into exactly one thing using something that resembles addition and we convert less than one thing into exactly one thing using something that resembles zero. Once you learn to think in terms of these patterns, programming becomes as simple as basic arithmetic: combinable components go in and exactly one combinable component comes out.

These abstractions scale limitlessly because they always preserve combinability, therefore we never need to layer further abstractions on top. This is one reason why you should learn Haskell: you learn how to build flat architectures.
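The "combine many things into one, use a zero for none" pattern the excerpt describes is a monoid. A minimal sketch in Python for illustration (the excerpt is about Haskell, where this is the `Monoid` type class; the class and `mconcat` names below just mirror that vocabulary):

```python
from functools import reduce

# A monoid is a combining operation plus an identity element.
# "More than one thing" collapses via combine; "less than one thing"
# (the empty case) is represented by the identity.
class Monoid:
    def __init__(self, identity, combine):
        self.identity = identity
        self.combine = combine

    def mconcat(self, items):
        # Any number of combinable components in, exactly one out.
        return reduce(self.combine, items, self.identity)

addition = Monoid(0, lambda a, b: a + b)
lists = Monoid([], lambda a, b: a + b)

print(addition.mconcat([1, 2, 3]))       # 6
print(lists.mconcat([[1], [2, 3], []]))  # [1, 2, 3]
print(addition.mconcat([]))              # 0 -- the "zero" case
```

Because the output is the same kind of combinable thing as the inputs, the result can be fed straight back into another `mconcat` — the "flat architecture" the quote is pointing at.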
TOP  Inspiration  Haskell  Architektur  Software  Software_Engineering  category_theory  abstractions  scaling_website  flat_architectures 
april 2014 by snearch
The Second Coming of Java: A Relic Returns to Rule Web | Wired Enterprise |
Originally, Twitter was one, monolithic application built with Ruby on Rails. But now, it’s divided into about two hundred self-contained services that talk to each other. Each runs atop the JVM, with most written in Scala and some in Java and Clojure. One service handles the Twitter homepage. Another handles the Twitter mobile site. A third handles the application programming interfaces, or APIs, that feed other operations across the net. And so on.
Twitter  Java  Scala  Clojure  scaling_website 
september 2013 by snearch
Hacker Chat: Pinboard Creator Maciej Ceglowski Talks About Why Boring Architecture is Good, and More – ReadWrite
The following come to mind, in no particular order:

Take advantage of the fact that it's 2011 and you can load servers up with RAM.
Use a RDBMS and take the time to learn it very thoroughly. High-Performance MySQL (the O'Reilly book) and the Percona blog are indispensable for MySQL; I'm sure similar resources exist for Postgres.
Use dedicated (not virtualized) hardware. I/O can be awful on virtualized servers and debugging I/O slowness there is next to impossible.
Use caching as a last resort, not as a fundamental design strategy. It's 2011 - unless you have millions of users, your app should be able to run fast with all caches disabled.
Use frameworks for prototyping, but build from the ground up once you know what you're building.
Resist excessive abstraction
Set performance targets for yourself. For example, one goal for Pinboard is a median page load time of under a third of a second. This will force you to instrument well and optimize appropriately.
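Instrumenting for a target like "median page load under a third of a second" can be as simple as recording per-request timings and checking the percentile. A hypothetical sketch (the function names are illustrative; in production the samples would go to a metrics store, not a list):

```python
import statistics
import time
from contextlib import contextmanager

timings = []  # seconds per request

@contextmanager
def timed_request():
    # Wrap each page render to collect a latency sample.
    start = time.perf_counter()
    try:
        yield
    finally:
        timings.append(time.perf_counter() - start)

def meets_target(samples, target_seconds=1 / 3):
    # Median, per the Pinboard goal; outlier (p99) latency deserves
    # a separate check -- see the "look at outlier page load times"
    # advice elsewhere in this collection.
    return statistics.median(samples) < target_seconds

with timed_request():
    pass  # stand-in for rendering a page

print(meets_target(timings))
```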
Entrepreneurs/Freelancer  Interview  Ceglowski_Maciej  architecture_software  LAMP  optimization  print!!!  boring  Business  Entrepreneurship  mehr_A_verdienen  scaling_website 
august 2013 by snearch
Quora’s Technology Examined | Big Fast Blog

Just like Facebook, where co-founder Adam D’Angelo previously worked, Quora heavily uses MySQL. In answer to the Quora question “When Adam D’Angelo says “partition your data at the application level”, what exactly does he mean?“, D’Angelo goes into the details of how to use MySQL (or relational-databases generally) as a distributed data-store.

The basic advice is to only partition data if necessary, keep data on one machine if possible and use a hash of the primary key to partition larger datasets across multiple databases. Joins must be avoided. He cites FriendFeed’s architecture as a good example of this. FriendFeed’s architecture is described by Bret Taylor in his post “How FriendFeed uses MySQL to store schema-less data“. D’Angelo also states that you should not use a NoSQL database for a social site until you have millions of users.
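D'Angelo's "hash of the primary key" advice maps each row to one of N databases deterministically. A sketch of the idea (the shard names are made up; this is not Quora's implementation):

```python
import hashlib

SHARDS = ["db0", "db1", "db2", "db3"]  # hypothetical MySQL instances

def shard_for(primary_key: int) -> str:
    # Use a stable hash so the same key always lands on the same
    # database. (Python's built-in hash() is randomized per process,
    # so hashlib is used instead.)
    digest = hashlib.md5(str(primary_key).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# All rows for a given key live on exactly one shard -- which is
# precisely why cross-shard joins must be avoided.
print(shard_for(42) == shard_for(42))  # True: deterministic
```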

It is not only Quora and FriendFeed who are heavily using MySQL. Ever heard of “Google”? It is hard to imagine, since everything Google does has to scale so well, but in the words of Google, “Google uses MySQL [...] in some of the applications that we build that are not search related”. Google has released patches for MySQL related to replication, syncing, monitoring and faster master promotion.
Python  MySql  print!!  scaling_website 
may 2013 by snearch
DynamoDB One Year Later: Bigger, Better, and 85% Cheaper… - All Things Distributed
From our own experience designing and operating a highly available, highly scalable ecommerce platform, we have come to realize that relational databases should only be used when an application really needs the complex query, table join and transaction capabilities of a full-blown relational database. In all other cases, when such relational features are not needed, a NoSQL database service like DynamoDB offers a simpler, more available, more scalable and ultimately a lower cost solution.
Vogels_Werner  amazon  DynamoDB  Erlang  NoSQL  vs.  SQL  PROs  CONs  scaling_website 
march 2013 by snearch
A love affair with PostgreSQL | Hacker News
taligent 46 minutes ago | link

No, it's not like MySQL, which has MySQL Cluster: a supported, well-documented, OOTB, easy-to-use solution with lots of enterprise customers. Similarly, Percona has a very, very impressive product with great support.

Instagram had to roll their own, and Postgres-XC and PgPool have both always seemed pretty sketchy. No official support, no notable customers, shocking complexity, and documentation that's all over the place, with pages from, say, 2009 referencing PostgreSQL 8 (e.g. the PgPool beginner guide). They may be great solutions, but do they really look like something that inspires confidence?

These guys seem to have the right idea:

It would just be nice to have something as simple and polished built into PostgreSQL.
Postgresql  CONs  horizontal_scaling  scaling_website 
january 2013 by snearch
Twitter survives election after Ruby-to-JVM move | Hacker News
trimbo 1 hour ago | link

They use both.

"Last week, we launched a replacement for our Ruby-on-Rails front-end: a Java server we call Blender. We are pleased to announce that this change has produced a 3x drop in search latencies and will enable us to rapidly iterate on search features in the coming months."


mnutt 25 minutes ago | link

At the same time they did that, they replaced MySQL with a real-time version of Lucene.

Almost every one of these "we switched from A to B and got a 3x speed increase" articles conflates a lot of different variables. The first version you build when you have no traffic and product/market fit is the most important thing. Performance is a low priority. Eventually it hits a bottleneck and you begin to look at performance. Perhaps there is another language out there that is faster than the one you're using. At this point nobody says "let's do an exact code translation from A to B". As you rewrite, you keep a constant eye on performance. It often involves ripping out abstractions and moving closer to the metal. The system you end up with usually looks nothing like the one you started out with, nor should it since it is the product of all of your experience scaling up to that point.

Scala  Java  Lucene  scaling_website 
november 2012 by snearch
Rails like framework for C++ with great speed | Hacker News
wheels 3 hours ago | link

We have two main stacks at my company. One is Rails-based, the other is a mix of C++ and Java. The tasks that the C++ and Java side handles are significantly more complex (real-time graph traversal and number crunching) than what Rails is doing.

On similarly sized server instances our Rails stack can handle about 10 requests per second, whereas our C++ / Java stack can handle about 400.

In practice the only time that I've seen the database be the bottleneck in Rails is when the database is accessed by people that pretend that ActiveRecord is in-process data and just use it like it's not querying a database (e.g. I've seen pageviews that require 30+ SQL queries). That or their database is set up in stupid ways and doesn't have indexes in the right places.

Rails is actually very slow. That said, I think a smarter approach than recreating the framework in faster languages would be to allow C or C++ into the main Rails project and to rewrite the hot parts in one of them. We sped up ActiveResource by about 35x by reimplementing it in C++ (with no changes required to our apps).
C++  Webdevelopment  high_speed  scaling_website 
august 2012 by snearch
Facebook Engineering: What is the set of tools used by the dev team at Facebook? - Quora
Daniel Fernandes, Student
21 votes by Robb Shecter, Saurabh Sharan, Sundeep Yedida, (more)
This Facebook Tech Talk documents some of their internal workflows:

Some things of note:

They created an internal IRC bot that manages commits. The IRC bot will answer questions about where/when your code was pushed.
The bot will also ask if you are awake, able, and ready to support your commits in production, which will indicate to release engineers that you will be there to support your code if something breaks. You can also vouch for other engineers' commits, in case they are out of office.
A "karma" system for judging the reliability of a programmer's commits
'Dogfooding': new changes are used internally on the final product before being pushed to production
A custom tool is used for bugtracking, along with an e-mail alias to complain about bugs.
Bittorrent is used to populate production servers with new code
Gatekeeper is Facebook's method of gradually deploying changes by demographics (by age, country, IP Whitelist/Blacklists, exposure)
Phabricator (now opensourced) is a discussion framework for engineers to talk about commits before they are pushed.
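A Gatekeeper-style gate decides per user whether a feature is on, based on demographics and explicit lists. A toy sketch of the idea (the field names, rule shape, and rollout logic are assumptions for illustration, not Facebook's implementation):

```python
import zlib

def gated(feature: str, user: dict, rule: dict) -> bool:
    # Explicit lists win over everything else.
    if user["id"] in rule.get("blacklist", set()):
        return False
    if user["id"] in rule.get("whitelist", set()):
        return True
    # Demographic filters: country, age.
    if "countries" in rule and user["country"] not in rule["countries"]:
        return False
    if "min_age" in rule and user["age"] < rule["min_age"]:
        return False
    # Deterministic percentage rollout: hash the user id together with
    # the feature name so each feature exposes a different user slice.
    bucket = zlib.crc32(f"{feature}:{user['id']}".encode()) % 100
    return bucket < rule.get("exposure", 0)

rule = {"countries": {"US"}, "min_age": 18, "exposure": 50,
        "whitelist": {7}}
print(gated("new_feed", {"id": 7, "country": "DE", "age": 12}, rule))  # True, whitelisted
```

The hash-based bucket means a user's exposure is stable across requests, which matters when gradually ramping a feature from 1% to 100%.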
Profession  Developers_Toolbox  Bittorrent  Facebook  Deployment  Website  Workflow  Erfolgsstrategien  dogfooding  scaling_website 
may 2012 by snearch
The Instagram Architecture Facebook Bought for a Cool Billion Dollars | Hacker News
nostromo 3 hours ago | link

I can't see this as a user acquisition play. The number of Instagram users that are already on Facebook must be, what, 90%? Maybe more?

So, it's obviously not for talent ($100mm per head? no way...), users, or tech. That leaves it as a competitive play. They took Instagram off the market to keep it out of the hands of Apple and Google.
We'll just tl;dr the article here, it's very well written and to the point. Definitely worth reading. Here are the essentials:

Lessons learned: 1) Keep it very simple 2) Don’t re-invent the wheel 3) Go with proven and solid technologies when you can.
3 Engineers.
An Amazon shop. They use many of Amazon's services. With only 3 engineers, they don't have the time to look at self-hosting.
100+ EC2 instances total for various purposes.
Ubuntu Linux 11.04 (“Natty Narwhal”). Solid, other Ubuntu versions froze on them.
Amazon’s Elastic Load Balancer routes requests and 3 nginx instances sit behind the ELB.
SSL terminates at the ELB, which lessens the CPU load on nginx.
Amazon’s Route53 for the DNS.
25+ Django application servers on High-CPU Extra-Large machines.
Traffic is CPU-bound rather than memory-bound, so High-CPU Extra-Large machines are a good balance of memory and CPU.
Gunicorn as their WSGI server. Apache harder to configure and more CPU intensive.
Fabric is used to execute commands in parallel on all machines. A deploy takes only seconds.
PostgreSQL (users, photo metadata, tags, etc) runs on 12 Quadruple Extra-Large memory instances.
Twelve PostgreSQL replicas run in a different availability zone.
PostgreSQL instances run in a master-replica setup using Streaming Replication. EBS is used for snapshotting, to take frequent backups.
EBS is deployed in a software RAID configuration. Uses mdadm to get decent IO.
All of their working set is stored in memory. EBS doesn’t support enough disk seeks per second.
Vmtouch (portable file system cache diagnostics) is used to manage what data is in memory, especially when failing over from one machine to another, where there is no active memory profile already.
XFS as the file system. Used to get consistent snapshots by freezing and unfreezing the RAID arrays when snapshotting.
Pgbouncer is used to pool connections to PostgreSQL.
Several terabytes of photos are stored on Amazon S3.
Amazon CloudFront as the CDN.
Redis powers their main feed, activity feed, sessions system, and other services.
Redis runs on several Quadruple Extra-Large Memory instances. Occasionally shard across instances.
Redis runs in a master-replica setup. Replicas constantly save to disk. EBS snapshots backup the DB dumps. Dumping on the DB on the master was too taxing.
Apache Solr powers the geo-search API. They like the simple JSON interface.
6 memcached instances for caching. Connect using pylibmc & libmemcached. Amazon's ElastiCache service isn't any cheaper.
Gearman is used to: asynchronously share photos to Twitter, Facebook, etc; notifying real-time subscribers of a new photo posted; feed fan-out.
200 Python workers consume tasks off the Gearman task queue.
Pyapns (Apple Push Notification Service) handles over a billion push notifications. Rock solid.
Munin to graph metrics across the system and alert on problems. They write many custom plugins using Python-Munin to graph signups per minute, photos posted per second, etc.
Pingdom for external monitoring of the service.
PagerDuty for handling notifications and incidents.
Sentry for Python error reporting.
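The "feed fan-out" job mentioned above pushes a new photo id onto every follower's feed list. A sketch of the pattern, using an in-memory dict as a stand-in for Redis (in production this would be Redis LPUSH plus LTRIM, run by the Gearman workers; the social graph here is invented):

```python
from collections import defaultdict, deque

FEED_LIMIT = 1000  # keep each feed bounded, like Redis LTRIM
feeds = defaultdict(lambda: deque(maxlen=FEED_LIMIT))  # follower -> feed
followers = {"alice": ["bob", "carol"]}  # hypothetical social graph

def fan_out(author: str, photo_id: int) -> None:
    # One write per follower -- this O(followers) cost is why fan-out
    # runs as a background task, not inside the web request.
    for follower in followers.get(author, []):
        feeds[follower].appendleft(photo_id)

fan_out("alice", 101)
fan_out("alice", 102)
print(list(feeds["bob"]))  # [102, 101] -- newest first
```

Reading a feed is then a cheap O(1) list fetch, which is what makes Redis a good fit for this workload.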
Facebook  Hintergründe  scaling_website  acquisition 
april 2012 by snearch
High Scalability - Architecture - Pay to Play to Keep a System Small

Technical Underpinnings
net@night Interview
Personal Email


16.3 million bookmarks
52 million tags
9.4 million urls
989 GB archived content
A little under 1 hour cumulative downtime since July 8th, 2010.


Cron jobs


Machine 1: 64 GB, runs a database master, stores user archives and runs search
Machine 2: 32 GB, runs the failover master, crawls various outside feeds, does background tasks
Machine 3: 16 GB, web server and database slave


A copy of the database is kept on all three machines.
The website runs on the 16GB machine. The database fits entirely in RAM and page load times have improved by a factor of 10 or more.
Master-master architecture with an additional read slave. All writes are pointed at one DB, these include bookmark, user, and tag tables.
The second master runs:
Aggregate calculations like global link counts and per-user statistics.
Nightly DB backups using mysqldump. The backup is stored to S3 in a compressed format.
Perl scripts run background tasks:
Downloading outside feeds, caching pages for users with the archive feature enabled, handling incoming email, generating tag clouds, and running backups.
Perl was chosen because of an existing skill set and the large support library available in CPAN.
Features like most popular bookmarks are generated by a cron job that is generally run each night, but are turned off when the load becomes too high.
PHP is used to generate HTML pages:
No templating engine is used. No frameworks are used.
APC is used to cache PHP files.
No other caching is used.
Sphinx is used for the search engine and for global tag pages.

Lessons Learned
Have a mantra. Pinboard has the goals of being fast, reliable, and terse. They think these are the qualities that will earn and keep customers. When a problem comes up, like massive growth, they always prioritize so that these system qualities are maintained. For example, their first priority is preventing data loss, which dictated changing their server architecture. So if the site seems conceptually confusing during a period of growth, that's OK... as long as the site is quickly and reliably saving links.
Start as a paid site from the beginning. The advantage of being a paid site is you don't get the rush of new users, so you can stay small. When the demise of Delicious was announced, if they would have been a free site they would have been down immediately, but being a paid site helps smooth out the growth.
Charge based on the number of users. Pinboard has a unique pricing scheme that is designed to scale better than services with free accounts. The price is based on the current number of users: as the number of users goes up, the price goes up. People are paying for the resources they are using. This is similar to, but enticingly different from, the Amazon or Google App Engine payment model. This is a one-time fee. For an extra $25/year all your bookmarks can be cached and searched.
Use boring and faded technologies. These help ensure the site will never lose data and be very fast.
A rule of thumb: if you are excited to play around with something, it probably doesn't belong in production.
Make switching as simple as possible. Pinboard removes adoption objections by automatically importing and exporting to Delicious and by supporting the Delicious API.
Staying small is much more fun. When you can offer personal customer support and interact directly with users you'll have a much better time.
Compare machine costs based on dollars per GB of RAM or storage. Pinboard originally ran on Slicehost and Linode, but they moved to a different service when the cost expressed in dollars per GB of RAM or storage was far higher, without any offsetting benefits.
Turn off features under load. Turn off search, for example, if you need performance elsewhere.
A medium to large site is the most expensive. Small sites are relatively cheap to run, but at some point during the growth curve the marginal cost of each new user increases. It costs more because data has to be split across multiple machines and those machines must be bought and managed. There's a scaling cost. Once you get into millions of users it gets cheaper again. Those first steps from where you go to a tiny site to a medium or medium large site are painful and expensive.
Call the shots on your own product. Depending on who you believe, Delicious was harmed by the continual layoffs at Yahoo, but the real problem was that the Delicious team were not the decision makers. New features were prioritized over reliability, stability, and innovation. It doesn't matter how hard or long you work when your fate is in the hands of others.
Small doesn't always work. A storm of new users added over seven million bookmarks, more than were collected over the entire lifetime of the service, and traffic to the site was over a hundred times normal. As a result normal background tasks like search, archiving, and polling outside feeds were suspended. An elastic strategy to handle spike loads like these isn't all bad.
Look at outlier page load times, not median page load times, to judge the quality of your service. It's not acceptable if a page can take multiple seconds to load even if most of the page load times are acceptable.
Punt on features to build quickly. Pinboard was built quickly partly because social and discovery features were deferred by saying "go somewhere else for that." Other sites will let you share links with friends and discover new and interesting content, but no other site acts like a personal archive and that's Pinboard's niche.
Segregate services by machine. When a web server shares a machine with other services the web server can take a hit. Another example is once each day the search indexer would wrestle with MySQL over memory while it did a full index rebuild.
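"Turn off features under load" can be a simple threshold check before each expensive feature. A sketch of the idea (the thresholds, shed order, and feature names are invented, not Pinboard's actual logic):

```python
def enabled_features(load_avg: float, shed_order=("search", "tag_cloud")):
    """Return the features to keep on at the current load average.

    The core feature (saving bookmarks) is never shed; expensive
    extras are dropped one by one as load climbs.
    """
    features = {"bookmarking", "search", "tag_cloud"}
    # Shed one extra feature for each full point of load above 2.0.
    to_shed = max(0, int(load_avg - 2.0))
    for feature in shed_order[:to_shed]:
        features.discard(feature)
    return features

print(enabled_features(1.0))  # everything on
print(enabled_features(4.5))  # search and tag_cloud shed
```

The same mechanism covers the "most popular bookmarks" cron job above: it is just a feature that gets switched off when load is too high.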

Related Articles

What The “Great Delicious Exodus” Looked Like For Pin-Sized Competitor Pinboard by Erick Schonfeld. The service wasn’t handling a huge number of requests to begin with—a few hundred per minute at peak—but that number increased about tenfold to over 2,500 requests per minute.
Quick thoughts on Pinboard by Matt Haughey
Back To Basics: Ditch Delicious, Use Pinboard by Michael Arrington
Why Is My Favorite Bookmarking Service by Ben Gross; Thank You: Pinboard, Welcome by Stephen O'Grady
TOP  Inspiration  Business  Website  Entrepreneurship  Webservices  no_Framework  print  Erfolgsprinzip  scaling_website 
february 2012 by snearch
Hacker News | Ask HN: What is your preferred Python stack for ...
espeed 10 hours ago | link

* haproxy - frontline proxy
* varnish - app server cache
* nginx - static files
* uwsgi - app server
* flask - primary framework
* tornado - async/comet connections
* bulbflow - graph database toolkit
* rabbitmq - queue/messaging
* redis - caching & sessions
* neo4j - high performance graph database
* hadoop - data processing
Python  website  stack  webdevelopment  server_side  scaling_website 
august 2011 by snearch
Why does Quora use MySQL as the data store rather than NoSQLs such as Cassandra, MongoDB, CouchDB, etc? - Quora
4. You can actually get pretty far on a single MySQL database and not even have to worry about partitioning at the application level. You can "scale up" to a machine with lots of cores and tons of ram, plus a replica. If you have a layer of memcached servers in front of the databases (which are easy to scale out) then the database basically only has to worry about writes. You can also use S3 or some other distributed hash table to take the largest objects out of rows in the database. There's no need to burden yourself with making a system scale more than 10x further than it needs to, as long as you're confident that you'll be able to scale it as you grow.

5. Many of the problems created by manually partitioning the data over a large number of MySQL machines can be mitigated by creating a layer below the application and above MySQL that automatically distributes data. FriendFeed described a good example implementation of this [4].
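The "layer of memcached servers in front of the databases" in point 4 is the cache-aside pattern: reads hit the cache first, and only misses reach MySQL. A sketch with plain dicts standing in for memcached and the database:

```python
cache = {}                    # stand-in for the memcached layer
database = {1: "row one"}     # stand-in for MySQL
db_reads = 0                  # counts reads that actually hit the DB

def get_row(key):
    global db_reads
    if key in cache:          # cache hit: the database never sees the read
        return cache[key]
    db_reads += 1             # cache miss: one DB read, then populate cache
    value = database.get(key)
    cache[key] = value
    return value

get_row(1); get_row(1); get_row(1)
print(db_reads)  # 1 -- the database "basically only has to worry about writes"
```

The cache layer is easy to scale out (add more memcached nodes) precisely because it holds no authoritative state; writes still go to MySQL, which must invalidate or update the cached entry.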
MySQL  NoSQL  MongoDB  CouchDB  Cassandra  scaling_website 
february 2011 by snearch
Stack Exchange’s Architecture in Bullet Points -
Software and Technologies Used:

* C# / .NET
* Windows Server 2008 R2
* SQL Server 2008 R2
* Ubuntu Server
* CentOS
* HAProxy for load balancing
* Redis for caching
* CruiseControl.NET for builds
* Lucene.NET for search
* Bacula for backups
* Nagios (with n2rrd and drraw plugins) for monitoring
* Splunk for logs
* SQL Monitor from Red Gate for SQL Server monitoring
* Mercurial / Kiln for source control
* Bind for DNS
software  StackExchange  StackExchange_Network  StackOverflow  ServerFault  SuperUser  architecture_software  scaling_website 
february 2011 by snearch
Hacker News | Why HN was slow and how Rtm fixed it
5 points by jrockway

All control flow is a subset of continuations. The stack is a continuation (calling a function is call-with-current-continuation, return is just calling the "current continuation"), loops are continuations (with the non-local control flow, like break/last/redo/etc.), exceptions are continuations (like functions, but returning to the frame with the error handler), etc. Continuations are the general solution to things that are normally treated as different. So continuations are just as efficient (or inefficient) as calling functions or throwing exceptions.

In a web app context, though, it's kind of silly to keep a stack around to handle something like clicking a link that returns the contents of database row foo. People do this, call it continuations, and then run into problems. The problem is not continuations, the problem is that you are treating HTTP as a session, not as a series of request/responses. (The opposite of this style is REST.)
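The claim "the stack is a continuation" can be made concrete in continuation-passing style, where every function takes an explicit "rest of the computation" instead of returning. A small illustrative sketch in Python (the comment's examples are Scheme-flavored; the function names here are invented):

```python
# In CPS, "return" is just calling the current continuation k.
def add_cps(a, b, k):
    k(a + b)

def square_cps(x, k):
    k(x * x)

def pythagoras_cps(a, b, k):
    # The nested lambdas ARE the call stack, reified as ordinary
    # values -- which is why function calls, exceptions, and early
    # exits all fall out of the same mechanism.
    square_cps(a, lambda a2:
        square_cps(b, lambda b2:
            add_cps(a2, b2, k)))

result = []
pythagoras_cps(3, 4, result.append)
print(result[0])  # 25
```

Because the continuation is a first-class value, it could just as well be stored and invoked later — which is exactly the (per the comment, ill-advised) trick of suspending a web request mid-flight.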
TOP  inspiration  asynchronous_event_processing  kqueue  epoll  Multithreading  MzScheme  Racket  Haskell  modern  Perl  node.js  fd  eintauchen  Lernherausforderung  FreeBSD  select  poll  Sockets  server_side  reverse_proxying  nginx  Yahoo  Filo_David  Rockway_Jonathan  cps_continuation_passing_style  scaling_website  Client/Server  higher_quality 
january 2011 by snearch
Apache-Modul für schnellere Webseiten | iX
As a counterpart to its Firefox add-on Page Speed, which measures the load and execution times of web pages in the browser, Google presents the Apache module mod_pagespeed. Depending on its settings, it performs a number of optimizations on web pages before they reach the browser.
Apache  Google  mod_pagespeed  scaling_website 
november 2010 by snearch
High Scalability - Facebook at 13 Million Queries Per Second Recommends: Minimize Request Variance
Facebook gave a MySQL Tech Talk where they talked about many things MySQL, but one of the more subtle and interesting points was their focus on controlling the variance of request response times and not just worrying about maximizing queries per second.
Facebook  mysql  scaling_website 
november 2010 by snearch
Hacker News | Minecraftwiki serving more traffic than Stackoverflow with 4 servers (and PHP)
5 points by citricsquid 3 hours ago | link

We're looking at a couple of alternatives and I can report back with our findings when we know enough. Right now we're looking at vBulletin, which powers some of the larger forums; it is apparently very good if you're willing to strip out the poor parts (apparently search is terrible).

It looks as if the best approach is to roll your own. phpBB seems to be designed with the smaller user in mind, so while routing every single file through file.php for easy processing might work well for Johnny and his friends' forum, once you hit a large scale it becomes rather a pain.

So yeah, no idea, we'll find out soon though, I'll report back when I know :-)
october 2010 by snearch
how we got 100,000 visitors without noticing - the historious blog
Answering the question "how didn't we notice?" is now easier. We didn't notice because not many of those visitors actually visited the main page of the service, and even fewer people signed up as a result! This is less than ideal from a business standpoint, but it showcases the potential historious has, and we are always happy when we provide such a good user experience that the user doesn't even realise that they were on historious!

"Surely, however, your server would exhibit some load with all these people?!" you ask. Well, no. You see, we have Varnish serving as a caching frontend for media and some pages, including cached pages. None of the requests ever hit Apache (or, indeed, got anywhere past Varnish), and Varnish is so efficient at serving pages that the site load remained at a cool 0.00(!) for the entire duration of the incident.
TOP  inspiration  website  Startup  Business  print  scaling_website 
october 2010 by snearch
Things I learnt tracking a billion events in 24 hours | Playtomic Blog
Kill switches for users

I obviously need a way to deactivate games so this can't happen in the future. Having all the traffic routed through a single subdomain means I can't disable any individual game; whatever I do still leaves them using resources. In the not-too-distant future I'll set things up so every game uses its own unique subdomain which I can re-route to nowhere, so it's one person's problem instead of everybody's. The API in games is self-disabling, so when it encounters connectivity issues it just stops trying to send, so this solution should make yesterday very easy to avoid in the future.
september 2010 by snearch
High Scalability - Digg: 4000% Performance Increase by Sorting in PHP Rather than MySQL
O'Reilly Radar's James Turner conducted a very informative interview with Joe Stump, current CTO of SimpleGeo and former lead architect at Digg, in which Joe makes some of his usually insightful comments on his experience using Cassandra vs MySQL. As Digg started out with a MySQL-oriented architecture and has recently been moving full speed to Cassandra, his observations on some of their lessons learned and the motivation for the move are especially valuable. Here are some of the key takeaways you'll find useful:
SQL  MySQL  Php  scaling_website 
march 2010 by snearch
Cassandra (database) - Wikipedia, the free encyclopedia
Cassandra is an open source distributed database management system. It is an Apache Software Foundation top-level project[1] designed to handle very large amounts of data spread out across many commodity servers while providing a highly available service with no single point of failure. It is a NoSQL solution that was initially developed by Facebook and powers their Inbox Search feature[2]. Jeff Hammerbacher, who led the Facebook Data team at the time, has described Cassandra as a BigTable data model running on an Amazon Dynamo-like infrastructure[3].
Cassandra  reddit  Startup  NoSQL  Facebook  scaling_website 
march 2010 by snearch
Presentation Summary “High Performance at Massive Scale: Lessons Learned at Facebook” « Idle Process
Jeff also relayed an interesting philosophy from Mark Zuckerberg: ”Work fast and don’t be afraid to break things.” Overall, the idea is to avoid working cautiously the entire year, delivering rock-solid code, but not much of it. A corollary: if you take the entire site down, it’s not the end of your career.
Facebook  Zuckerberg_Marc  Startup  scaling_website 
december 2009 by snearch
What Software or Programming Language do sites like Facebook or Myspace use?
Building one of these sites is a MASSIVE undertaking. I was involved in building a white-label social networking system last year - it took 4 programmers and nearly 6 months of work. Seriously, the best advice I can give you regarding setting up a "niche" social network is to use an off-the-shelf product rather than build your own. There are so many of these social networks - the hard part is not the software, but the marketing. A really good commercial option is phpFoX » Social Networking Script. This is pretty slick and has ALL the features you need for your own site. They will host it for you, and you can extend it if you need to using PHP. To get started today, this is my recommendation. Lovd By Less -- Open Source Social Network -- Who loves you, baby? is an open-source Ruby on Rails system. Ruby on Rails is my development platform of choice - I am easily 20-30% more productive in Rails and I charge my clients accordingly. This is not as f
Startup  Social_Networks  Sites  Branchenlösungen  memcached  scaling_website 
december 2009 by snearch
Hacker News | HTTP Intermediary Layer From Google Could Dramatically Speed Up the Web
Let's drop some of that flash and turn on gzip compression (if you haven't done that already) and make sure your cache headers are set properly. That alone will probably give you a 50% boost.
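Both tips are one-liners in most stacks. A sketch of what gzip buys on a typical repetitive HTML payload, plus example cache headers of the kind the comment refers to (the header values are illustrative):

```python
import gzip

html = b"<html>" + b"<p>hello world</p>" * 500 + b"</html>"
compressed = gzip.compress(html)

# Repetitive markup compresses extremely well -- often far better
# than the ~50% boost mentioned, with zero application changes.
print(len(compressed) < len(html) // 2)  # True

# The matching response headers (example values):
headers = {
    "Content-Encoding": "gzip",
    "Cache-Control": "public, max-age=3600",
}

assert gzip.decompress(compressed) == html  # lossless round trip
```

In practice you would let the web server (nginx/Apache) do the compression and set the headers, rather than doing it in application code.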
webserver  http  beschleunigen  scaling_website 
november 2009 by snearch