
Open Source Downsides

Open source is awesome, whether you need a Swiss Army knife or some glue between two solutions that don't quite mesh the way you need them. It's also great for analysis and for basic tasks that you need at your fingertips. It's versatile and deep in functionality. The best part is the cost (aside from the legal ramifications of using it commercially). The downside is that, working in a large, complex environment where I have responsibility across so much diversity, many of my needs are not fulfilled well enough to warrant selecting open source as the platform:

  1. Manageability
    1. Policy
    2. Templates
    3. Audits
    4. Grouping
  2. Deployment
    1. SSH
    2. Telnet
    3. WMI
    4. Remote command
  3. Scale
    1. Distributed systems globally
    2. Failover
  4. Reporting
    1. Complex reporting needs
    2. Logging sophistication

Most of these needs are completely missed by most open source tools, since addressing them would turn a Swiss Army knife into a nuclear weapon. There is nothing wrong with that, but either companies need to understand and implement these capabilities around open source tools (as GroundWork is slowly attempting to do), or we need to stop using open source management and monitoring tools in large-scale environments.
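
To make the deployment gap concrete, below is a minimal sketch of the kind of glue I end up writing myself around these tools: pushing a Nagios-style plugin out to a group of hosts over SSH. This is a hypothetical illustration (the host groups, plugin name, and remote path are made up, and it assumes key-based SSH access with the standard ssh/scp clients), not part of any particular product; a commercial platform would bundle the grouping, templating, and rollout logic for you.

#!/usr/bin/env python
"""Hypothetical glue script: push a monitoring plugin to a host group over SSH.

Assumes key-based SSH access and the standard ssh/scp client binaries on PATH;
the host groups, plugin name, and remote path below are made up for illustration.
"""
import subprocess

# A stand-in for the "grouping" feature a commercial platform would provide.
HOST_GROUPS = {
    "web": ["web01.example.com", "web02.example.com"],
    "db": ["db01.example.com"],
}

PLUGIN = "check_disk_custom.sh"           # local plugin file to deploy
REMOTE_DIR = "/usr/local/nagios/libexec"  # typical Nagios plugin directory

def deploy(group):
    """Copy the plugin to every host in the group and mark it executable."""
    for host in HOST_GROUPS[group]:
        subprocess.run(["scp", PLUGIN, "%s:%s/" % (host, REMOTE_DIR)], check=True)
        subprocess.run(["ssh", host, "chmod 755 %s/%s" % (REMOTE_DIR, PLUGIN)], check=True)
        print("deployed %s to %s" % (PLUGIN, host))

if __name__ == "__main__":
    deploy("web")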

I really hope that these companies take a stand and do this, as it will help reduce cost, increase choice, and make for a better, more maintainable system for me to manage and implement. Flexibility is key, and open source is the king of flexibility.

Comments

Anonymous said…
Hi there:

I enjoy your blog very much. I am in a similar position at a tech company out west, and we looked at some open source monitoring tools, including OpenSMART, which is made here locally at Stanford. It was good for small silos but really got bogged down when we tried to scale it out. We ended up going with a vendor's package when senior management started screaming about the lack of reports and scalability concerns with the open source tools. What are your thoughts on that?
Unknown said…
Scalability is a major concern, but tools like Nagios and Zenoss have proven scalability beyond 20,000 nodes.

Reporting is also a concern, but it can be augmented with many of the commercial reporting or BI tools on the market. As for the data coming off agents, it's most often consumed by other tools.

Which vendor product did you end up going with over OpenSMART?
Anonymous said…
We actually went with OpenView. I was a little hesitant at first, but we had a history with them: we liked the local team, and senior management had a positive view of the solution.

At the end of the day, the consensus for our team was to use open source tools for small pockets of non-critical systems because they are so cheap, and then write a contract with the business to have them fund the OV agents for their critical apps. They needed to put in 500k+ for this...but got it done. Keep up the great work.
