
Industry Insights: My thoughts from AppSphere 15

Stuck in Vegas for two weeks and longing to be back home, I found the trip a trial of might and perseverance. The first week was spent at AppDynamics AppSphere, and the second at the Gartner Data Center, Infrastructure & Operations Management Conference. (Look for a post next week on the Gartner conference.)
The second annual AppSphere drew 1,500+ attendees, doubling the 2014 attendance. There was plenty of passion and wonderful user engagement, but most importantly a string of exciting product announcements. The change from 2014 to 2015 was noticeable and impressive.
Having spoken at the first AppSphere on behalf of Gartner and felt the energy at that conference, I found the momentum one year later apparent in the content, scale, and depth of the event. In 2015, AppSphere expanded from one track to four, with a large increase in customer and partner speakers. Product announcements included Browser Synthetic Monitoring, Server Infrastructure Monitoring, and new enhancements for session management, the C/C++ SDK, and many new AWS extensions. The feature customers were most excited about was the ability to tie log messages automatically to the transactions that generate them, providing context that has been missing in the ITOA industry.
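To make the log-to-transaction idea concrete, here is a minimal sketch of the underlying technique, not AppDynamics' actual implementation: an APM agent stamps each log line with the ID of the in-flight business transaction, so a log tool can later group lines per transaction. The transaction name and ID handling below are illustrative assumptions.

```python
import contextvars
import logging
import uuid

# Context variable holding the ID of the in-flight business transaction.
# A real APM agent would set this automatically; here we do it by hand.
current_txn_id = contextvars.ContextVar("txn_id", default="-")

class TransactionIdFilter(logging.Filter):
    """Stamp every log record with the active transaction ID."""
    def filter(self, record):
        record.txn_id = current_txn_id.get()
        return True

def configure_logging():
    handler = logging.StreamHandler()
    handler.setFormatter(
        logging.Formatter("[txn=%(txn_id)s] %(levelname)s %(message)s"))
    handler.addFilter(TransactionIdFilter())
    log = logging.getLogger("demo")
    log.addHandler(handler)
    log.setLevel(logging.INFO)
    return log

def handle_checkout(log):
    # Start of a hypothetical "checkout" transaction: mint an ID
    # and bind it to the current execution context.
    token = current_txn_id.set(uuid.uuid4().hex[:8])
    try:
        log.info("checkout started")    # both lines carry the same txn_id,
        log.info("payment authorized")  # so they can be correlated later
    finally:
        current_txn_id.reset(token)

if __name__ == "__main__":
    handle_checkout(configure_logging())
```

Every log line emitted while the transaction is active carries the same `txn_id`, which is the context that lets an analytics backend join logs to traces.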
On the sessions side, there was a highly engaging customer presentation from HBO (a must-watch if you are a Game of Thrones fan). John Feiler, senior staff engineer, provided insight into the challenges of video streaming at scale. AppDynamics was used to correct issues before the season five premiere, and is used throughout the application development and operations lifecycle. This ensures a high-quality viewing experience for all customers (including me).
Other great sessions included one from Pearson's Mike Jackson and Tim Boberg. Pearson is a global leader in learning, and they use AppDynamics across the enterprise, serving over 1.3 million daily logins and over one billion page views every two months. Pearson was challenged to provide a standard way to see and monitor across many different technology stacks. Their tool of choice is AppDynamics.
Barclaycard's Peter Gott explained how they were able to remove silos and get the organization to modernize and share data effectively using AppDynamics technologies, working across a highly variable technology stack and ultimately greatly improving user experience and uptime. Barclaycard has many more digital initiatives, including wearables and new mobile payment technologies, where AppDynamics plays a key role.
There were many other compelling and informative sessions, and I wish I could have attended more of them. The audience loved hearing from customers, as did I!
There were also two great panels. I was honored to chair the microservices panel, featuring product managers from Red Hat, Microsoft, Google, and a heavy microservices user, DreamWorks studios. My colleague Prathap Dendi led a panel on IoT, which was particularly interesting, featuring customers from Garmin, Tesla, and SmartThings. Our partner Red Hat also participated in the IoT panel.
Partner participation included Red Hat, Microsoft, Trace3, BigPanda, Apica, ExtraHop, Capgemini, Scicom, Column Technologies, Moogsoft, xMatters, Electric Cloud, Neotys, Orasi, and mainframe partner DG Technology Consulting. We thank them and look forward to working with all of our existing and new partners throughout 2016.
We are looking forward to another record-breaking AppSphere conference on November 14-17, 2016, where we hope to once again double attendance. Fingers crossed!

