
Tons of progress this week – VM, Exchange, Firewalls, Security, Storage/NAS

Had a very productive and busy week. Between server crashes taking down the production site and the build-out project we are working on, the team was very busy. I built over a dozen VMs and three ESXi boxes.

I also worked on some of the configuration to finish out the main Exchange 2007 implementation. We are waiting for ESXi boxes to ship to the remote offices; each will house a mailbox server, a domain controller, and possibly a client access server.
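Once those boxes arrive, the role installs themselves should go quickly; Exchange 2007's unattended setup takes the roles right on the command line (a rough sketch, assuming the mailbox/client access split described above):

    setup.com /mode:Install /roles:Mailbox
    setup.com /mode:Install /roles:ClientAccess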

One of my colleagues (who knows the setup of the WAN well) is working on the configuration and testing of our new firewall infrastructure. We are putting in all SonicWall NSA series appliances: a 2400 for Shanghai, and 3500s for Atlanta and Geneva. The production environment will run off an HA pair of 4500 series boxes. The features, ease of setup, and price were excellent, and we have good resources from the reseller and the vendor if we need them.

We are also running a pilot (which has been going quite slowly) of the Q1 Labs QRadar product. I have used this product at previous companies, and it's been a great tool for security, network analysis, and troubleshooting. The problem is that running the pilot alongside the huge projects that need to be done over the next few months isn't really feasible. I hope to invest more time in testing it, but my priorities dictate otherwise.

I also got rid of a couple of older boxes this week by doing P2V migrations with VMware Converter Starter Edition. The enterprise ESXi boxes are now pretty much full, so I need more disk space before I can do any more of this work. That should be resolved as we get the NAS implemented.

We wrapped up the BlueArc vs. NetApp evaluation, and we are going with a couple of NetApps. The solution looks very good, and we should have it up in a couple of weeks. I'm looking forward to building out some Exchange, MSSQL, and ESXi clusters on it!
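If we export the NetApp volumes over NFS for the ESXi side (likely, though the final layout isn't decided; the filer name and export path below are placeholders), attaching a datastore is one command per host with esxcfg-nas (or its vicfg-nas remote CLI equivalent for ESXi):

    esxcfg-nas -a -o netapp01 -s /vol/vm_datastore nfs_datastore01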

Talk to you all soon, have a nice weekend. Loving the weather in Georgia :)
