
HP Upgrades – QC, QTP, PC

Before I go into these 3 suites of tools: when is HP finally going to update BAC, RUM, or Diagnostics? These tools have seemed really stagnant for the last 3 years. I'm not going to HP Software Universe anymore, and I keep getting new account reps, so I have no idea what the roadmap is these days. That's enough HP bashing; on to the good stuff.

We did the yearly HP upgrade over the last couple of weeks. Here is the rundown on the technologies and what was involved in each:

1. QTP – Very easy upgrade; we did this on our terminal server and a desktop. The license server is now supported on 64-bit machines, so we moved it off an older 2003 box onto a 2008 R2 system (a quick client-side sanity check for the move is sketched after this list). No issues with the upgrade, and there seem to be a lot of improvements; still waiting for feedback from our QA team on them.

2. PC – We ended up building a couple of new VMs for this, as we moved it onto Windows 2008 R2 (64-bit) as well. There were no issues with the reinstall or with moving our scripts and data over to the new systems. The tool itself didn't change much, but the fact that it runs on 64-bit is a good step toward getting rid of our 2003 systems.

3. QC – This one is the problem child. Initially we were going to do a larger rollout of QC 10, so we built 2 VMs: one for the DB and one for the app. Annoyingly, the DB had to go on an older OS, so I ended up reinstalling it onto a Windows 2008 (32-bit) system and moving to SQL 2008; from what I can tell in the documentation (which is not all that clear), SQL 2008 R2 and Windows 2008 R2 are not yet supported. I have a case open with HP, as during setup it doesn't seem to want to connect to the database. I have checked the SQL Server TCP settings and verified the login/password both locally and over the network (those checks are sketched below). More on this one as HP helps me with the issues.
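
For the QC database issue, the checks I describe above look roughly like this. It's a minimal sketch, assuming Python with pyodbc and the SQL Server ODBC driver available on the QC app server; the server name, port, and credentials are placeholders rather than our real values.

import socket
import pyodbc

DB_HOST = "qc-db-2008"   # placeholder DB server name
DB_PORT = 1433           # default SQL Server TCP port

# 1. Raw TCP check: is the SQL Server listener reachable from the app box?
with socket.create_connection((DB_HOST, DB_PORT), timeout=5):
    print("TCP connection to %s:%d succeeded" % (DB_HOST, DB_PORT))

# 2. Login check: can the install account actually authenticate?
conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=%s,%d;UID=qc_install;PWD=changeme" % (DB_HOST, DB_PORT),
    timeout=5,
)
print(conn.cursor().execute("SELECT @@VERSION").fetchone()[0])
conn.close()

If both steps pass from the app server and the QC installer still refuses to connect, that points back at the installer rather than SQL itself, which is roughly where my HP case stands.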
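
And going back to the QTP license server move in item 1, here is the quick client-side check I mean. This is a hedged sketch that assumes the concurrent-license clients locate the server through the Sentinel-style LSFORCEHOST/LSHOST environment variables; the hostname is a placeholder, not our real server name.

import os
import socket

NEW_LICENSE_HOST = "qtp-lic-2008r2"   # placeholder name for the new 2008 R2 box

# Report which license-server variables are set on this client.
for var in ("LSFORCEHOST", "LSHOST"):
    value = os.environ.get(var)
    print("%s = %r" % (var, value))
    if value and value.lower() != NEW_LICENSE_HOST.lower():
        print("  warning: %s still points somewhere else" % var)

# Make sure the new host at least resolves from this client.
try:
    print("resolves to: " + socket.gethostbyname(NEW_LICENSE_HOST))
except socket.gaierror as exc:
    print("DNS lookup failed: %s" % exc)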

Sorry for the lack of updates; I should have a few posts coming up now.
