
Been in Geneva doing a datacenter build-out

Built out our new datacenter; I was in Geneva for the last 12 days. We are getting ready to move our US hosting facility to Geneva in preparation for merging the MFG and sourcingparts environments. It's going to take several months overall, but the first steps will occur in the next few weeks. Everything went really well. We installed and moved a lot of gear: servers, F5 BIG-IPs, (Unknown Vendor) HA firewalls, and a nice NetApp 3040 dual-head cluster.

The only issue was that we got a quad 1GbE card in it instead of the 10G copper card we wanted. NetApp only makes a 10G fiber card, so we had to buy different switch modules as well. NetApp ate the extra cost on the cards for us, which was nice. We also can't run LACP on the 10G fiber cards, so we lose switch redundancy.
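In the meantime, the four 1GbE ports can still be aggregated with LACP on the filer side. A minimal sketch of what that looks like, assuming the Data ONTAP 7-mode `vif` syntax of that era; the vif name, interface names, and addressing below are made-up examples, not our actual config:

```shell
# Create a dynamic multimode (LACP) vif from the four 1GbE ports,
# load-balancing on IP address (7-mode syntax; names are examples)
vif create lacp vif0 -b ip e0a e0b e0c e0d

# Assign an address to the aggregate and bring it up
ifconfig vif0 10.0.0.50 netmask 255.255.255.0 up
```

The switch side has to match: the four ports need to be in an LACP port channel, ideally split across two switch modules, which is exactly the redundancy we give up on the single 10G fiber links.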

The NetApp is currently serving VMware ESXi over NFS, and it screams. We are also using iSCSI for our MSSQL clusters. The speeds on VMware and iSCSI are very good, even with the current 4G we are running until we get the new cards in. Working perfectly, and very happy about it.
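Mounting an NFS export from the filer as a datastore is a quick operation on each host. A rough sketch, assuming the ESX 3.x-era service-console tools; the filer hostname, export path, and datastore name are hypothetical examples:

```shell
# Allow outbound NFS from the host, then mount the filer export
# as a NAS datastore (hostname/paths below are examples)
esxcfg-firewall -e nfsClient
esxcfg-nas -a -o filer1.example.com -s /vol/vmware_ds1 vmware_ds1

# List configured NAS datastores to confirm the mount
esxcfg-nas -l
```

One nice property of NFS datastores versus VMFS-over-iSCSI is that the filer owns the filesystem, so flexible volumes can be grown or snapshotted without touching the hosts.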

The new network and server design is nice and clean, and it's working perfectly as well. We still have a lot to build in the first environment we are working on, as well as the other environments we have to build out. I'll be back in Geneva to finish the last physical moves in February or so, but we'll be doing things remotely and moving as much as possible in the meantime. My team over there has been doing great, as has the US team, which was responsible for most of the implementation and design. Great job to all parties involved. I'm very excited by the progress and the flexibility we'll have with our new 70-disk NetApp system, networking improvements, and the larger VMware environment.

More later.


Anonymous said…
Wow this is so exciting Jonah!
Thanks for sharing!
