
Open Source Downsides

Open source is awesome, whether you need a Swiss Army knife or some glue between two solutions that don't quite mesh the way you need them. It's also great for analysis and for basic tasks that you need at your fingertips. It's versatile and deep in functionality. The best part is the cost (aside from the legal ramifications of using it commercially). The downside is that, working in a large, complex environment where I have responsibility across so much diversity, many of my needs go unmet, which makes it hard to justify selecting open source as the platform (see the sketch after this list):

  1. Manageability
    1. Policy
    2. Templates
    3. Audits
    4. Grouping
  2. Deployment
    1. SSH
    2. Telnet
    3. WMI
    4. Remote command
  3. Scale
    1. Distributed systems globally
    2. Failover
  4. Reporting
    1. Complex reporting needs
    2. Logging sophistication
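
To make the manageability point concrete, here is a minimal sketch of what templates and grouping look like in Nagios object configuration (the host, address, group, and contact names below are hypothetical):

    # Hypothetical Nagios object configuration: one reusable template plus grouping.
    define host {
        name                linux-prod        ; template only, never monitored directly
        register            0                 ; 0 = template, not a real host
        check_command       check-host-alive
        max_check_attempts  3
        contact_groups      ops-oncall        ; hypothetical contact group
    }

    define host {
        use         linux-prod                ; inherit everything from the template
        host_name   web01.example.com         ; hypothetical host
        address     10.0.0.11
        hostgroups  prod-web
    }

    define hostgroup {
        hostgroup_name  prod-web
        alias           Production Web Servers
    }

Templates and grouping are the easy part; policy, audits, and mass deployment over SSH or WMI are where most open source tools stop short.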

Most of these needs are completely missed by most open source tools, because addressing them turns a Swiss Army knife into a nuclear weapon. There is nothing wrong with that, but either companies need to understand these gaps and build around open source tools (as GroundWork is slowly attempting to do), or we need to stop using open source management and monitoring tools in large scale environments.

I really hope these companies take a stand and do this, as it will reduce cost, increase choice, and make a better, more maintainable system for me to manage and implement. Flexibility is key, and open source is the king of flexibility.

Comments

Anonymous said…
Hi there:

I enjoy your blog very much. I am in a similar position at a tech company out west, and we looked at some open source monitoring tools, including OpenSMART, which is made here locally at Stanford. It was good for small silos but really got bogged down when we tried to scale it out. We ended up going with a vendor's package when senior management started screaming about the lack of reports and the scalability concerns with the open source tools. What are your thoughts on that?
Unknown said…
Scalability is a major concern, but tools like Nagios, Zenoss, and some of the other tools have proven scalability beyond 20,000 nodes.
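
For example, Nagios gets past a single poller by running distributed servers that forward every service result to a central instance as passive checks. Here is a minimal sketch, following the stock distributed-monitoring example from the Nagios documentation (paths and the submit_check_result helper script vary by install):

    # nagios.cfg on each distributed poller: hand every service result
    # to a command rather than only handling it locally.
    obsess_over_services=1
    ocsp_command=submit_check_result

    # commands.cfg: the command relays the result to the central server
    # (typically via send_nsca to the NSCA daemon listening there).
    define command {
        command_name  submit_check_result
        command_line  /usr/local/nagios/libexec/eventhandlers/submit_check_result $HOSTNAME$ '$SERVICEDESC$' $SERVICESTATE$ '$SERVICEOUTPUT$'
    }

The central server consumes those submissions as passive check results, which is how the larger installations spread polling load across regions.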

Reporting is also a concern, but it can be augmented with many of the commercial reporting or BI tools on the market. As for the data coming off the agents, it is most often consumed by other tools.

Which vendor product did you end up going with over OpenSMART?
Anonymous said…
We actually went with OpenView. I was a little hesitant at first, but we had a history with them: we liked the local team, and senior management had a positive view of the solution.

At the end of the day, the consensus for our team was to use open source tools for small pockets of non-critical systems, because they are so cheap, and then write a contract with the business to have them fund the OV agents for their critical apps. They needed to put in 500k+ for this...but got it done. Keep up the great work.
