Datacenter Automation Decisions - Rev2

As referenced in my last post, this is a high priority for everyone at my company. We have decided that Bladelogic fits one area very well, but the vision, strategy, and direction of that company are not in line with our needs. Bladelogic also had a stronger POC team and better talent. If Opsware can do what we need for application configuration management and deployment, then it would be the best choice for the following reasons:

  • Dependency mapping, using the agents to collect the data (see the sketch after this list).
    • The EMC/nlayers solution is better, but not feasible on our network.
    • Opsware has an excellent visual application manager, which can help us troubleshoot problems and changes quickly.
  • End-to-end view of network and, soon, storage assets.
    • We have started a POC with the NAS product, and it delivers very good network data to the server tool.
    • Alterpoint, which Bladelogic partners with for network data in its tool, is up next week. That integration is not as tight, which is understandable.
  • Scalability and resiliency.
    • Bladelogic lacks built-in replication and agent failover; both can be approximated with third-party tools.
    • Our ultimate scope is over 20,000 systems, with varying uses of the tool.
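
Since agent-based dependency mapping is central to the first point, here is a minimal sketch of the underlying idea: connection data sampled by agents on each host, rolled up into a host-to-host dependency map. This illustrates the technique only, not Opsware's implementation; the host names and record format are made up.

from collections import defaultdict

# Hypothetical connection records an agent might report after sampling
# established TCP sessions on its host: (local_host, remote_host, remote_port).
agent_reports = [
    ("web01", "app01", 8080),
    ("web02", "app01", 8080),
    ("app01", "db01", 1521),
]

def build_dependency_map(reports):
    """Aggregate per-host connection samples into
    host -> set of (upstream_host, port) dependencies."""
    deps = defaultdict(set)
    for local, remote, port in reports:
        deps[local].add((remote, port))
    return deps

# Print each host and the services it depends on.
for host, upstream in sorted(build_dependency_map(agent_reports).items()):
    print(host, "->", ", ".join(f"{h}:{p}" for h, p in sorted(upstream)))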

We are going to deploy Opsware before a general decision is made for the company; if the product cannot do what we need, we will go with Bladelogic. Either way, given our lack of centralized authentication, the major issue is going to be getting an agent onto the systems so it can deploy software; a rough sketch of one approach follows below. We will see how this pans out.
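
On the agent problem, here is a minimal sketch of the kind of per-host bootstrap we may end up scripting, assuming plain SSH with local accounts since there is no central directory. The host list, installer name, and credential lookup are all hypothetical placeholders, not our real inventory.

import subprocess

# With no centralized authentication, each system needs its own local
# account to land the agent installer. Everything below is a placeholder.
HOSTS = ["sol01.example.com", "lin02.example.com"]
INSTALLER = "agent-installer.sh"  # hypothetical installer name

def lookup_account(host):
    """Stand-in for a per-host credential store; assumes key-based SSH."""
    return "localadmin"

def push_agent(host):
    user = lookup_account(host)
    # Copy the installer to the host, then run it, both via the local account.
    subprocess.run(["scp", INSTALLER, f"{user}@{host}:/tmp/"], check=True)
    subprocess.run(["ssh", f"{user}@{host}", f"sh /tmp/{INSTALLER}"], check=True)

for host in HOSTS:
    push_agent(host)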

Needless to say, Bladelogic is not happy about this decision, and rightly so. I have explained to them that this is not a final decision, but it is a better direction given the strategy and needs of our business. A software choice is a mix of capabilities, direction, and the suite of offerings a single vendor can provide.
