Web Application Hosting

People in Java land normally deploy products in three tiers: a web server (static content), an application server (dynamic content), and a database server. People in Microsoft land normally deploy applications in two tiers: a web server (static and dynamic content) and a database server.

I always hear the argument for separating them for security reasons, but in reality, once you own the web tier it's pretty easy to own the application tier. It may be slightly more secure, but that's not a reason to skip patching things (which is what people use it as).

In reality, the complexity of having a whole layer that does essentially nothing seems pointless to me. Why bother deploying more hardware for a minor benefit? Even in the Java world, the application servers do everything a standard web server can do, so what's the point?
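To make that last point concrete, here is a minimal sketch of a single tier serving both static and dynamic content from one process. It assumes embedded Jetty 9.x and the javax servlet API on the classpath; the port, the "static" directory, and the HelloServlet endpoint are illustrative placeholders, not anything from a real deployment.

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.DefaultServlet;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

public class SingleTierServer {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080); // one process, one tier

        ServletContextHandler context = new ServletContextHandler(ServletContextHandler.SESSIONS);
        context.setContextPath("/");
        context.setResourceBase("static"); // static files (images, html, downloads) served from disk
        context.addServlet(new ServletHolder(new HelloServlet()), "/app/*"); // dynamic content
        context.addServlet(DefaultServlet.class, "/"); // Jetty's built-in static file servlet

        server.setHandler(context);
        server.start();
        server.join();
    }

    // Placeholder standing in for real application code.
    public static class HelloServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            resp.setContentType("text/plain");
            resp.getWriter().println("dynamic response from the same server that serves the static files");
        }
    }
}

A standalone container like Tomcat ships an equivalent default servlet out of the box, which is the point: a static-only web tier in front duplicates something the app server already does.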

Any comments? Please leave some.

Comments

Anonymous said…
The point is two-fold (at least): choice and adaptability.

Choice - You get to choose the webserv and the appserv. You get best of breed, instead of being tied to a single vendor / product / version. (Also, when updating the webserv, the appserv can take over its duties, so there's less downtime.)

With the two-tier approach, it's like hopping around on two feet (webserv and appserv tied together). You can get places just fine, but you'd better be sure that the spot you're jumping to is safe, and that you'll be able to keep your balance when you land.

With the separate-servers approach, it's like walking normally with both feet. One step at a time, and you always have at least one foot planted on the ground. You can always balance, and use the other foot to probe around and decide where to step next, without too much risk. (As an individual, this lets you try out different things much more easily -- good for learning and good for job prospects.)

Adaptability - Separating them gives you a chance to alter the mix according to what your site(s)/app(s) do. If you have a lot of static traffic (images, HTML/text files, downloads), you add more webservs. If you're very app-heavy, like a B2B market or something, you add more appservs.

So, is it necessary for the individual to bother with? Eh, it's not that hard, and it prepares you with some knowledge for future jobs, etc.

Do things 'not work' with two tiers? Nah. But I would add that the three-tier approach is better suited than the two-tier one in places where there are more than three tiers.

I agree that security isn't really a good argument for one or the other. By the time you're thinking about what web/app serv to use, you'd better have already thought about securing everything. However, the security of a particular two-tier system versus a particular three-tier system might be a very valid reason for going with one or the other, but not because of the number of tiers. It just happens that among today's offerings, some of the three-tier systems are more secure, and easier to secure, than some of the two-tier systems.
Unknown said…
I would rather see the use of a reverse proxy, or something that actually gives you added capability, than a bunch of web servers that simply regurgitate the content being generated on the dynamic side of things. With most products I deal with, the content is 99% dynamic, and thus the processing power should be in that area.

Thanks for the comment.
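
To illustrate the kind of added capability a reverse proxy brings over a pass-through web tier, here is a minimal sketch of a caching proxy in plain Java (11+), using only the JDK's built-in com.sun.net.httpserver and java.net.http classes. The backend address, the ports, and the GET-only, never-expiring cache are illustrative assumptions, not a production design; a real deployment would use a dedicated reverse proxy product.

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TinyCachingProxy {
    // Hypothetical backend app server; adjust for the real environment.
    private static final String BACKEND = "http://localhost:8080";
    private static final HttpClient client = HttpClient.newHttpClient();
    // Naive in-memory cache keyed by request path (GET only, no expiry, no size limit).
    private static final Map<String, byte[]> cache = new ConcurrentHashMap<>();

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);
        server.createContext("/", TinyCachingProxy::handle);
        server.start();
        System.out.println("Proxy on :8081 forwarding to " + BACKEND);
    }

    private static void handle(HttpExchange exchange) throws IOException {
        String path = exchange.getRequestURI().toString();
        byte[] cached = cache.get(path);
        if (cached != null) {
            // Cache hit: answer from the proxy tier without touching the app server.
            reply(exchange, 200, cached);
            return;
        }
        try {
            // Cache miss: forward the request (GETs only in this sketch) to the backend.
            HttpRequest request = HttpRequest.newBuilder(URI.create(BACKEND + path)).GET().build();
            HttpResponse<byte[]> response = client.send(request, HttpResponse.BodyHandlers.ofByteArray());
            if (response.statusCode() == 200) {
                cache.put(path, response.body());
            }
            reply(exchange, response.statusCode(), response.body());
        } catch (IOException | InterruptedException e) {
            reply(exchange, 502, new byte[0]); // backend unreachable
        }
    }

    private static void reply(HttpExchange exchange, int status, byte[] body) throws IOException {
        exchange.sendResponseHeaders(status, body.length == 0 ? -1 : body.length);
        try (OutputStream os = exchange.getResponseBody()) {
            os.write(body);
        }
    }
}

A cache hit never reaches the application tier at all, which is exactly the kind of work a static-only web tier sitting in front of an app server does not take off its hands.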
