
BisManOnline Developers Network

Resources, Articles, Information and more for Software Developers and IT Professionals

The BisManOnline system serves over 20 Million Page Views per month across over 1 Million Visits.  In addition, our ad server delivers over 140 Million Banner ads per month.  Handling these loads requires solid infrastructure, efficient caching systems, and databases designed for performance.  We constantly strive for cost-effective solutions to handle our traffic.

This section contains articles on the tools, hardware, infrastructure, software, and design principles we have employed, as well as information on our improvement plans as we identify issues with our architecture and deploy new solutions for our growing load.

Posted: 02/18/2012 08:49 AM

BisManOnline uses a number of open source software components to power its growing platform.  The primary benefits of Open Source are cost, flexibility, documentation, and a large community of support.

Follow us on our BisManOnline Developers Facebook Page - The page dedicated to our geeky tech stuff.

Open Source allows us to scale faster and at a lower cost than if we were to choose off the shelf, closed-source software.

Below are a few of the technologies we use, how we use them and why:

Apache Web Server

Our front end web servers and our ad server use PHP on the Apache web server, running on a Red Hat Linux OS.   Red Hat was chosen for the level of support offered by our hardware technology partner, BTINet.

Apache is an extremely flexible web server package and is utilized by over 66% of all websites on the Internet.   Our implementation features customizations and settings that reduce memory load by loading only the modules our service requires.  In addition, in order to support PHP, we use Apache's prefork MPM.
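As a rough sketch of this approach (the module list and limits below are illustrative values, not our production settings), a trimmed-down prefork configuration looks something like this:

```apache
# httpd.conf (illustrative values, not our production settings)
# Load only the modules the application actually needs.
LoadModule php5_module     modules/libphp5.so
LoadModule rewrite_module  modules/mod_rewrite.so

# Prefork MPM: one process per request, required for non-thread-safe PHP.
<IfModule prefork.c>
    StartServers          8
    MinSpareServers       5
    MaxSpareServers      20
    MaxClients          150
    MaxRequestsPerChild 4000
</IfModule>
```

Fewer loaded modules means a smaller memory footprint per forked child, which is what makes prefork viable under load.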


Lighttpd

Image and static content serving is not a strong point of the Apache web server.  We evaluated a number of platforms and settled on Lighttpd as the solution for serving all static content, including images, JavaScript, and CSS.   Lighttpd is a very fast and efficient web server designed around a low memory footprint.  You can view the results of their performance comparison to Apache here.

Our Lighttpd implementation uses a dedicated server for serving our static content.  We keep the server heavy on RAM to increase the percentage of items that can be cached in memory and reduce disk reads.
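A minimal static-only Lighttpd configuration in this spirit might look like the following (paths and cache lifetimes are illustrative assumptions, not our production values):

```conf
# lighttpd.conf (illustrative) -- dedicated static host for images, JS, CSS
server.modules       = ( "mod_expire" )
server.document-root = "/var/www/static"
server.port          = 80

# Long client-side cache lifetimes for content that rarely changes,
# so browsers skip the request entirely on repeat visits.
$HTTP["url"] =~ "\.(png|jpe?g|gif|css|js)$" {
    expire.url = ( "" => "access plus 7 days" )
}
```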


PHP

From the ground up, BisManOnline has run on PHP for its code base.  A whopping 77% of all websites on the Internet use some version of PHP as their scripting language.   By design, PHP is extremely customizable and flexible, and it has broad community support and documentation.

Although many websites use an off-the-shelf framework such as WordPress, Drupal, or CakePHP, we chose to build our own framework from scratch.   Loosely built around the MVC (Model-View-Controller) concept, a custom framework gives us more flexibility in tuning the system for performance and our specific requirements.
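Our framework itself isn't public, but the basic shape of a hand-rolled MVC front controller can be sketched like this (all class, route, and data names below are hypothetical, for illustration only):

```php
<?php
// Minimal sketch of a hand-rolled MVC-style front controller.
// Class names, routes, and data are hypothetical examples.

// "Model": fetches data (stubbed here instead of hitting MySQL).
class AdModel {
    public function recent() {
        return array('Snowblower', 'Pickup Truck');
    }
}

// "Controller": coordinates the model and view for one request.
class AdController {
    public function listAction() {
        $model = new AdModel();
        return $this->render($model->recent());
    }
    // "View": a real framework would include a template file here.
    private function render($ads) {
        return "Ads: " . implode(', ', $ads);
    }
}

// Front controller: map a request URI to a controller/action pair.
function dispatch($uri) {
    $routes = array('/ads' => array('AdController', 'listAction'));
    if (!isset($routes[$uri])) {
        return '404 Not Found';
    }
    list($class, $action) = $routes[$uri];
    $controller = new $class();
    return $controller->$action();
}

echo dispatch('/ads'), "\n"; // prints "Ads: Snowblower, Pickup Truck"
```

Because we own every line of the dispatch path, we can short-circuit it (for example, to serve a fully cached page) before any controller or model code ever runs.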


MySQL

The MySQL database system provides the primary data store for BisManOnline's back end.  MySQL is also used by our ad serving platform for banner ad delivery and statistics.

We use a combination of MyISAM and InnoDB table types for our data storage depending on various factors (see this comparison of MyISAM vs. InnoDB), and we use a large amount of RAM on the DB server to keep as many reads as possible off the disks and in memory.   In fact, even though our data store is huge and we sometimes hit it with thousands of queries per second, disk activity on the server barely registers on our real-time performance graphs.
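The RAM-heavy approach boils down to sizing MySQL's buffers so the working set lives in memory. A sketch (the sizes are illustrative, not our production values, and would be tuned to the box's RAM and workload):

```ini
# my.cnf (illustrative sizes -- tune to your own RAM and workload)
[mysqld]
# InnoDB: keep the working set in the buffer pool so reads rarely touch disk.
innodb_buffer_pool_size = 12G
# MyISAM: key_buffer caches index blocks only; data blocks rely on the OS cache.
key_buffer_size = 2G
```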

We also implement a custom caching solution within MySQL for the majority of our ad list and search requests.   It ensures that only active, high-use data is cached in large denormalized tables, which saves us from having to use SQL joins in real time.   And because the majority of our read and search queries are pulled from the cached tables, updates and inserts on the base tables are faster as well.
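The idea can be sketched in SQL like this. All table and column names here are hypothetical, not our actual schema:

```sql
-- Illustrative only: a denormalized "cache table" that pre-joins the data
-- a search needs, so the hot read path never performs a JOIN.
CREATE TABLE ad_search_cache (
    ad_id         INT UNSIGNED NOT NULL PRIMARY KEY,
    title         VARCHAR(100) NOT NULL,
    category_name VARCHAR(50)  NOT NULL,  -- copied from the categories table
    price         DECIMAL(10,2),
    KEY idx_category (category_name)
);

-- Rebuilt offline from the base tables, so the JOIN runs out-of-band:
INSERT INTO ad_search_cache
SELECT a.id, a.title, c.name, a.price
FROM ads a JOIN categories c ON c.id = a.category_id
WHERE a.active = 1;

-- The real-time read path is then a single-table indexed lookup:
SELECT ad_id, title, price
FROM ad_search_cache
WHERE category_name = 'Vehicles';
```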


APC

One of the biggest performance improvements we made early on was implementing the APC PHP caching system.   Because PHP is a scripted language, the PHP processor compiles and executes the PHP code in real time on every request.   Reading and compiling a PHP script on every load is a waste of resources, because the PHP files themselves rarely change except when code revisions are deployed.  APC pre-compiles the PHP code and stores it in memory, so on subsequent requests the PHP processor does not need to read the files from disk and re-compile them.  Read more on PHP accelerators here.
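A minimal APC configuration along these lines might look like the following (the values are illustrative assumptions, not our production settings):

```ini
; php.ini (illustrative) -- APC opcode cache settings
extension = apc.so
apc.enabled  = 1
apc.shm_size = 128M   ; shared memory segment holding compiled opcodes
; Skip the per-request stat() on every .php file.  With this off, a deploy
; requires an explicit cache clear (or an Apache restart) to pick up changes.
apc.stat = 0
```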

This did add some complexity to our code-change process, but the performance improvement and drop in CPU usage on the web server and the ad server was astronomical.


Memcached

Caching, caching, and more caching.   Usually the biggest bottlenecks in any web application are disks and databases.   Most relational databases are designed for storing and retrieving massive amounts of data; unfortunately, under heavy load with many reads, writes, and joins, requests start to queue waiting for row and table locks, introducing large performance issues.   Most web developers find, over time, that they are issuing queries that return the exact same data over and over.

Caching solutions allow the developer to cache the result of a query, and store it for later retrieval without having to make a request to the database.

For years our system used a disk-based file-caching solution.  This, however, introduced new performance problems, because we still had to read from disk.  A file cache also doesn't work when you load-balance your web servers in a cluster and need to share the cache across more than one node.

Memcached allows us to cache data, counters, and more in memory on a centralized shared server that our web server nodes can access.  In fact, by using memcached to fully cache an entire page, we can process and deliver a fully cached version of our home page in under 4 milliseconds of server time.  That's 0.004 seconds!   Roughly 25% of our page views are delivered entirely from the memcached store.

Things like SQL query results, unread message counters (which need to be checked in real time), ad view counters, and other statistics are now stored in memory for super fast retrieval.   Each memcached server, properly configured, can handle anywhere from 50,000 to 120,000 requests per second.   Memcached also has scaling built in: the client library hashes each key to a node, so you can grow to 20 memcached servers with a single code change.
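The basic read-through pattern behind all of this can be sketched in PHP as follows. To keep the snippet self-contained, the cache is stubbed with an in-process array; in production this would be PHP's Memcached client pointed at the shared cache server (e.g. `$cache = new Memcached(); $cache->addServer('cachehost', 11211);`). The key names and values are hypothetical:

```php
<?php
// Cache-aside sketch: check memory first, fall back to the DB, then populate.
// ArrayCache is a stand-in for PHP's Memcached class (same get/set shape).
class ArrayCache {
    private $store = array();
    public function get($key) {
        return isset($this->store[$key]) ? $this->store[$key] : false;
    }
    public function set($key, $value, $ttl = 0) {
        $this->store[$key] = $value;  // a real cache would honor $ttl
    }
}

$queries = 0; // counts how many times we hit the "database"

function cached_ad_count($cache, &$queries) {
    $key   = 'home:ad_count';
    $count = $cache->get($key);
    if ($count === false) {
        $queries++;        // cache miss: run the (stubbed) SQL query,
        $count = 12345;    // e.g. SELECT COUNT(*) FROM ads WHERE active = 1
        $cache->set($key, $count, 60);  // cache the result for 60 seconds
    }
    return $count;
}

$cache = new ArrayCache();
cached_ad_count($cache, $queries); // miss: hits the DB once
cached_ad_count($cache, $queries); // hit: served straight from memory
echo "DB queries: $queries\n";     // prints "DB queries: 1"
```

Every repeat request inside the TTL window is answered from memory, which is how whole classes of identical queries disappear from the database.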


OpenX

Our ad serving platform is based on the open source technologies from OpenX, although we have long since forked the OpenX source into our own customized version.   Our enhancements include updates to their ad allocation algorithm (math-geek alert) as well as enhancements for mobile ad serving.


In addition to these major technologies, we use a number of other open source products, including ImageMagick, FCKeditor, Sendmail, and more.  And although not technically open source, we use products such as Google Analytics and Quantcast for data measurement, as well as API integrations with Google Maps, Facebook, the eBay Developer Network, and YouTube.

For even more geeky tech updates, follow us on our BisManOnline Developers Facebook Page.

Posted: 06/18/2011 11:14 AM

When BisManOnline launched in 1999 as a tiny, unknown website, it ran on a single server; almost 12 years later, the overall software and hardware architecture has grown into what we see today.   And tomorrow's architecture will change as new hardware and software become available and our overall traffic grows.

Even as of 2006, BisManOnline still ran on a single machine, serving 100% of its requests in real time and primarily uncached.  Every hit to the home page in 2006 ran a full SELECT SQL statement against the database just to deliver the ad counts.

Over time we began to spread the load across more servers.  We started by splitting off the ad server, then the application's database server, then the ad server's database server, the memory caching systems, and most recently all static image and content serving.   In addition, we have external systems for real-time traffic monitoring, as well as a data warehouse for reporting, statistics, and CRM.

Today's architecture looks a bit like this:

As you can see, our primary architecture is a fairly common LAMP stack, and our real-time traffic monitoring and data warehouse systems are run on Microsoft software.

The overall design is nothing spectacular, and I will discuss some of our current issues / future plans below.

Based on some traffic estimates, figuring averages for the percentage of content cached by the user's browser, and adding up our page views, ad server calls, image calls, and real-time AJAX requests from the front end, this entire architecture serves roughly 440 Million requests per month.   (This is probably a low estimate, as we assumed roughly 80% of image and static content requests are served from the browser cache; it's probably less than that.)

Scaling that number further, with 90% of our traffic arriving between 5am and 11pm, we average roughly 13,000 requests per minute against this architecture, which equates to roughly 215 requests per second.


All of the servers are multi-core Dell servers (various models) with fat amounts of RAM.  The primary group is hosted at BTINet's data center (see fancy picture here) in Bismarck, ND.   Our configuration is primarily Dell M610s with dual 4- or 8-core Intel Xeon processors, anywhere from 6GB to 24GB of RAM depending on the box, and dual 10K SAS drives in RAID Level 1.

In addition (not pictured), everything resides behind a set of Juniper Firewalls for security.


The primary setup all runs on various configurations of the LAMP stack (Linux, Apache, MySQL, PHP).  On top of the basic LAMP stack, we add APC (a PHP opcode caching system) and Memcached (an extremely fast and versatile pure-memory caching system).

Our image / static content server uses the very lightweight and super-fast Lighttpd instead of Apache.

In addition, we have small chunks of software for handling things like image manipulation and conversion (ImageMagick), plus various other utilities and programs.

Lessons Learned

In the future we will publish additional articles in this series on how all of this works together, but here are a few of the biggest lessons we have learned over time:

Relational Databases are Slow
Relational databases have their place, and, of course, we use them.  But the nature of their design makes them slow...very slow.   You can keep throwing hardware and RAM at them, but you always reach a point where their sheer size, and the number of requests you make against them, cannot be handled with a basic setup.  Some of our lessons here:
  • Reduce their workload - cache everything you can in memory, including query results
  • Index everything that is read, and perform updates offline
  • Never use joins - SQL joins are the worst-performing request you can make
  • Avoid table locking by using InnoDB (row-level locking)
  • Select only the columns you need from the DB to reduce network load
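As an illustration of the indexing and column-selection lessons above (table and column names are hypothetical):

```sql
-- "Index everything that is read": cover the WHERE and ORDER BY columns.
CREATE INDEX idx_ads_active_created ON ads (active, created_at);

-- "Select only what you need": pull three columns instead of SELECT *,
-- cutting network transfer and letting the index do the work.
SELECT id, title, price
FROM ads
WHERE active = 1
ORDER BY created_at DESC
LIMIT 20;
```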

Hard Disks are Slow

Hard disks are one of the biggest bottlenecks in any system.  They are extremely slow, but if your system is designed right, you can usually keep them out of most of your requests.
  • Make sure your static content servers have enough RAM to serve the majority of requests directly from memory
  • Use the APC opcode cache with the per-file OS stat calls turned off (apc.stat = 0)
  • Generate full-page caches, store them in RAM, and deliver them directly from memory (use memcached)
  • Cache SQL results in memory and retrieve them from there (use memcached)

Turn off Logging

You simply don't need Apache's real-time traffic logging anymore.  Log your errors, and that's it.  Use front-end services like Google Analytics to monitor your traffic.  Apache access logging is a waste of resources and I/O against the hard disks.
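In Apache terms, that amounts to keeping the error log and simply omitting any access-log directives (paths here are illustrative):

```apache
# httpd.conf (illustrative): keep the error log, drop per-request
# access logging to save disk I/O.
ErrorLog logs/error_log
LogLevel warn

# Omit (or comment out) any CustomLog / TransferLog directives:
# CustomLog logs/access_log combined
```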

Issues with our current Design

The number one issue with our current design is single points of failure.   We currently rely on the uptime of every component in the system for the entire application to function.   Our future plans include load balancing the front end and ad server, as well as replicating the database.   Right now this is simply a cost-benefit issue.  With the excellent resources at BTI we can normally recover from a down server in a very short amount of time.  Once the cost of that "amount of time" begins to increase, we will deploy redundancy and load balancing.


No, we haven't written anything earth-shattering here, but we wanted to show our basic setup.  In future articles we will break down some of these various systems and discuss in more detail how they work.

Posted: 06/16/2011 2:00 PM
Today we launch this special section on BisManOnline dedicated to information, articles, and more related to the "behind the scenes" software, hardware, and people that make BisManOnline tick.

All programming happens in-house, right here in Bismarck, ND.  We greatly shy away from any thought of outsourcing this work out of state, let alone, *gulp*, going overseas to software development companies in India.

The majority of our primary infrastructure is also hosted right here in Bismarck, ND with our long-time hosting provider, BTINet, whom we have utilized since late 2005.  (Note: we run one small hosted server out of state that is purely a traffic monitoring system; this ensures that our monitoring solution is fully segmented from our primary servers.)

The purpose of this series is really a number of things.   Over the years we have learned a lot, most of it the hard way, about things that work and things that don't, or things that "used" to work but no longer do as our traffic and user base grow.  So we hope to publish some of the things we've learned, as well as ideas, some fancy charts, and maybe some code snippets or open-sourced bits for those of you who might be interested.

We also have some other ideas in the works, but, we'll get to that when the time comes.

BisManOnline Monthly Visits since 2007

BisManOnline Monthly Page Views since 2007

Note:  If you're wondering about the page-view drop in September 2010, that's when we launched the new version of our site; it improved overall usability, let users find what they were looking for faster, and resulted in lower page-view load even as the number of visits increased.