Thursday, December 24, 2015

Industry Insights: My thoughts from AppSphere 15

Stuck in Vegas for two weeks and longing to be back home, I found the trip a trial of might and perseverance. The first week was spent at AppDynamics AppSphere, and the second at the Gartner Data Center, Infrastructure & Operations Management Conference. (Look for a post next week on the Gartner conference.)
The second annual AppSphere drew more than 1,500 attendees, doubling the 2014 attendance. There was plenty of passion and wonderful user engagement, and most importantly, lots of exciting product announcements. The change from 2014 to 2015 was noticeable and impressive.
Having spoken at the first AppSphere on behalf of Gartner and felt the energy of that event, I found the momentum and acceleration one year later apparent in the content, scale, and depth of the conference. In 2015, AppSphere expanded from one track to four, and there was a large increase in customer and partner speakers. Product announcements included Browser Synthetic Monitoring, Server Infrastructure Monitoring, new enhancements for session management and the C/C++ SDK, and many new AWS extensions. Most surprising was how excited customers were about the ability to tie log messages automatically to the transactions that generate them, providing context that has been missing in the ITOA industry.
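To make that log-to-transaction idea concrete, here is a minimal sketch of how log lines can carry the ID of the business transaction that produced them, so logs can later be joined back to traces. The field name, the context propagation, and the checkout example are illustrative assumptions of mine, not AppDynamics' actual mechanism.

# Minimal sketch: tag every log line with the ID of the business
# transaction that produced it, so logs can later be joined to traces.
# The "txn_id" field and the contextvar propagation are illustrative only.
import logging
import uuid
from contextvars import ContextVar

current_txn_id: ContextVar[str] = ContextVar("current_txn_id", default="-")

class TransactionFilter(logging.Filter):
    """Copy the active transaction ID onto every log record."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.txn_id = current_txn_id.get()
        return True

logging.basicConfig(format="%(asctime)s txn=%(txn_id)s %(levelname)s %(message)s",
                    level=logging.INFO)
log = logging.getLogger("checkout")
log.addFilter(TransactionFilter())

def handle_checkout(order_id: str) -> None:
    # An APM agent would normally start the transaction; here we fake one.
    token = current_txn_id.set(uuid.uuid4().hex[:12])
    try:
        log.info("processing order %s", order_id)            # carries txn=<id>
        log.warning("payment gateway slow for %s", order_id)  # same txn=<id>
    finally:
        current_txn_id.reset(token)

handle_checkout("A-1001")

With every log line stamped this way, a monitoring backend can group the messages under the transaction that emitted them instead of leaving them as disconnected text.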
On the sessions side, there was a highly engaging customer presentation from HBO (a must-watch if you are a Game of Thrones fan). John Feiler, senior staff engineer, provided insight into the challenges of video streaming at scale. AppDynamics was used to correct issues before the season five premiere, and is used throughout the application development and operations lifecycle. This ensures a high-quality viewing experience for all customers (including me).
Other great sessions included one from Pearson’s Mike Jackson and Tim Boberg. Pearson is a global leader in learning, and they use AppDynamics across the enterprise, serving over 1.3 million daily logins and over one billion page views every two months. Pearson was challenged to provide a standard way to see and monitor across many different technology stacks. Their tool of choice is AppDynamics.
Barclaycard’s Peter Gott explained how they were able to remove silos and get the organization to modernize and share data effectively using AppDynamics technologies, working across a highly variable technology stack and ultimately greatly improving user experience and uptime. Barclaycard has many more digital initiatives, including wearables and new mobile payment technologies, where AppDynamics plays a key role.
There were many other compelling and informative sessions. I wish I could have attended more of these —  the audience loved hearing from customers, as did I!
There were also two great panels. I was honored to chair the microservices panel, featuring product managers from Red Hat, Microsoft, Google, and a heavy microservices user, DreamWorks studios. My colleague Prathap Dendi led a panel on IoT, which was particularly interesting, featuring customers from Garmin, Tesla, and SmartThings. Our partner Red Hat also participated in the IoT panel.
Partner participation included Red Hat, Microsoft, Trace3, BigPanda, Apica, ExtraHop, Capgemini, Scicom, Column Technologies, Moogsoft, xMatters, Electric Cloud, Neotys, Orasi, and mainframe partner DG Technology Consulting. We thank them and look forward to working with all of our existing and new partners throughout 2016.
We are looking forward to another record-breaking AppSphere conference on November 14-17, 2016; we expect to once again double attendance, fingers crossed.

Thursday, November 19, 2015

Industry Insights: Gartner Survey Analysis on Critical Dimensions in APM

Over the summer, Gartner analyst Cameron Haight surveyed users about their APM implementations and published the results as Survey Analysis: End-User Experience Monitoring Is the Critical Dimension for Enterprise APM Consumers (Gartner clients only). There are several key areas where the data is quite telling.

Users’ main concerns

Based on this Gartner survey, and in my experience, users of APM tools indicate three primary issues:
  • APM tools are too expensive
This is a valid concern; APM is not inexpensive. But compared to the cost of APM seven years ago, license costs have fallen over 75%. Typical APM tools used to cost $15,000 – $20,000 per server; today we are looking at pricing closer to $4,000. These are list prices, which we know people do not necessarily pay. Additionally, the time to value, implementation time, and spend on services that used to be typical are a fraction of what they once were. I would estimate the cost of deployment and maintenance with a modern APM tool is 10% of the work of a legacy tool. This savings is not only in hard dollars; it also results in faster ROI and a shorter time to value.
  • APM tools are not well integrated
While this complaint is certainly the norm in the industry, at AppDynamics our unified monitoring approach addresses this concern by building a single user interface and a single installation for the platform, leveraged across the monitoring capabilities. Additionally, by building integrations and facilitating an open ecosystem, connecting systems together becomes easier than it was in the past. With modern software applications, this has changed significantly.
  • Tools are too complex
Deployment and usability have been major issues within tools. At AppDynamics, we address this with our modern web-based UI, easy deployment, and platform administration. This is coupled with a much smaller footprint for our controller (the monitoring server we require), which needs fewer resources to deploy (or just use SaaS and you don’t have to deal with the controller at all).

Breadth of APM implementation

25% of the 207 respondents say that APM covers 25% or more of their applications. I would have guessed this number to be much smaller. The question was not very clear about whether it was targeting critical applications or all applications. Users also have misconceptions about what APM is. The fact that a server monitoring product can capture metrics about applications or application health doesn’t make it an APM tool. Similarly, the use of older synthetic monitoring tools often passes for APM, when it does nothing to monitor the performance of real users or transactions.

APM Buyers

APM tools are bought by a lot of different people in most organizations; we see everything from single-user buying to departmental buying to enterprise-wide buying. For many of the reasons outlined above, the enterprise motions tend to happen over time as APM gains traction. Most APM tools 5+ years ago were bought by landing big enterprise deals, but today most land in smaller deals and expand across enterprises. There have been many swings of the pendulum between suites of tools, mini-suites of tools, and best of breed. We’ve been in a best-of-breed cycle for the last five years, likely due to the level of importance technology is playing within businesses as they digitally transform.
Based on Gartner survey data, buyers are opting for best of breed 59% of the time, which seems surprisingly low based on my conversations and on the fact that suite providers are significantly behind the market leaders within APM. The only explanation for this is that those taking the survey may not currently have active APM initiatives, or do not understand what is and what is not APM. Haight then goes on to state, “Enterprise APM consumers should deploy best-of-breed approaches as skills and finances dictate, but make sure to account for the potentially higher costs of integration.” This is another area where large suite vendors sell on PowerPoint, but upon implementation the integration is extremely challenging. The integration within suites of tools is incredibly expensive and complex to deploy and maintain; most organizations require a heavy services outlay to implement and maintain integrations, even with suites of tools. These large suite vendors provide illusions of well-integrated tooling, but based on first-hand knowledge this is not the case.
The buyers are broken down as follows, which is accurate based on my time as an analyst and my time at AppDynamics.
  • IT Operations = 67%
  • App Support = 11%
  • App Dev = 8%
  • Other teams = 15%
The app support function is often part of IT Operations, so differentiating between these two teams is often difficult.
Users also express concern about the ability of legacy APM solutions to deal with the large transformational shifts occurring through digital transformation. Once again, many vendors selling legacy tooling are now messaging toward digital initiatives, when their tooling is not suited to accomplish these things.
The way software is designed, implemented, and analyzed is also shifting. The Gartner survey points out that the shifts most concerning to users include IoT, cloud (SaaS), mobile, and microservices. There is clearly a large market opportunity for visibility in these areas, especially SaaS and IoT, where APM vendors have limited ability to support these environments today. Although AppDynamics has many great customer use cases in microservices, mobile, IoT, and cloud, APM vendors still lack the technology for deep monitoring of SaaS and face limitations when instrumenting IoT devices built on microcontroller architectures. These are all areas where our innovation and focus will create new kinds of visibility.
The key takeaway is that APM still has a long way to go to be as penetrated as it needs to be, and there are areas where major innovation has yet to occur to deal with new computing platforms, new connected devices, and new software architectures which will drive the next evolution of computing, data capture, and analytics.

Friday, October 16, 2015

What CIOs need to take away from Gartner Symposium


Gartner Symposium is the high-level CIO conference to attend. In this venue, Gartner unveils new thinking, including new predictions, surveys, and other insights into how people and technology will solve the problems of the future.

Highlights

Introduction to the algorithmic economy

The kickoff keynote by Peter Sondergaard, the SVP leading Gartner’s research business, began by imploring businesses to invest in and create algorithms as the key to unlocking insight from the vast amounts of data collected and generated. Peter went on to say that algorithms describe the way the world works, and in software they are key to allowing systems to exchange data with one another (otherwise known as M2M). These algorithms will be used by agents such as Google Now, Siri (Apple), Cortana (Microsoft), and Alexa (Amazon). These are highly advanced, algorithmically driven agents which use vast amounts of data, both public and private, to provide an interactive and intuitive interface that lets them predict what a user will want. These types of agent systems will define the post-app world. Peter went on to describe how these opportunities are the future of computing, and represent a $1 trillion business opportunity.

The Robots Are Here

Darryl Plummer presented the trends for 2016 and beyond. They included the movement towards automated agents, or robots. This includes what Gartner calls the digital mesh (devices, ambient experiences, and 3D printing) along with the application of smart machines (information, machine learning, and autonomous agents and things). These major changes will underpin the new reality IT must exist within. The cultural issue I’ve seen transpiring in the media is the relationship between people and machines; right now this relationship is cooperative. The future holds codependency and ultimately competitiveness, says Darryl.
The prediction is that by 2018, 20% of all business content will be authored by machines. These machines will be fed by the IoT ecosystem of sensors, transport, and analytics. Gartner predicts that 1 million IoT devices will be purchased every hour in 2021. These create not only tremendous opportunity but also incredible amounts of risk, both in terms of personal information and security. Darryl went on to make several daunting predictions around the risk and security issues that will arise from this digitization.
The final and most interesting trend, which came up several times throughout Symposium, is the use of smart agents, as I mentioned above. Gartner predicts that by 2020 smart agents will facilitate 40% of mobile interactions, which will mark the start of the post-app era.

2016 CIO Agenda and Survey

Dave Aron spoke about the freshly published 2016 CIO agenda and CIO survey. There was quite a bit of evolution here; Gartner introduced the digital platform:
The CIO survey included 2,944 CIOs managing $250B in IT spend. Revenue is clearly already shifting towards digital, with CIOs estimating that 16% of revenues are digital today. This is just the beginning of the journey; in five years the estimate is that 37% of revenues will be digital. These digitalization changes are undertaken to build efficiencies in operations, along with direct revenue generation.
The platform Dave laid out included aspects of bimodal delivery (on the bottom of the diagram above). The talent platform was one of the largest struggles CIOs mentioned in the survey: skills shortages were the critical gap, and the talent shortage is being felt by CIOs. With fewer computer science graduates over the last decade, this shortage is likely to continue and increase.
The leadership platform also included insights into the chief digital officer. Dave touched on the fact that CDOs are being put into place, but at a slower rate than Gartner had predicted. This means that CIOs are taking on a lot of digital transformation responsibilities based on the survey data.

Private Cloud

I attended a session led by Donna Scott focused on private cloud. Based on Donna’s polling at the Gartner infrastructure and operations management conference in June of this year, 40% of respondents said they would put 80% of workloads in private cloud and 20% in public cloud. The main reasons were agility, speed, compliance, security, application performance, protecting IP, reducing costs, and dealing with politics. She then highlighted common use cases for hybrid cloud, the use of public and private clouds at the same time. These can be split into simpler use cases, such as running development in public cloud while production runs in private cloud, and advanced use cases like active/active application resiliency.
Most successful private cloud implementations are done by type A companies who embrace hyperscale and web-scale computing. Donna highlighted examples from companies such as PayPal, DirecTV, Chevron, McKesson, and Cox Automotive. The progression most companies make in cloud is highly tied to application maturity:

Jeffrey Immelt — GE CEO

The interview of GE CEO Jeffrey Immelt was quite interesting. GE has over 15,000 software engineers, and it is changing its business dramatically to build a platform for the industrial internet. Immelt believes that if GE can build the platform to use this data and analytics to create greater efficiencies around industry (such as transportation, where GE is one of the largest manufacturers of jet engines and rail locomotives), it can create a digital differentiator within its products and allow other manufacturers to use its platform. He also outlined how the analytics will be used to create a “digital twin” for the physical assets within the analytics layer, including all relevant data and metrics for each instance of machinery. One great quote resonated with me after seeing all of the M&A within the IT Operations space: “In 20 years of M&A, more value was destroyed than created by software acquisitions.”
I also tried out the new Microsoft HoloLens, which was pretty nifty. We look forward to supporting that technology within AppDynamics as it matures and is brought to the enterprise market, which is where Microsoft has been focusing.
Symposium is a great high-level conference. Having spoken here twice but never been able to attend the sessions, I’ve gained a new appreciation for this event, both in terms of breadth and quality. I still believe it’s often too broad for many vendors and attendees, but it’s a wonderful overview of what’s happening in our ever-growing world of IT complexity and nuance.

Sunday, October 11, 2015

Software will power the Internet of Things

Today’s connected world is moving from devices towards things. What this means is that by using increasingly low-cost sensors embedded in devices, we can create many new use cases. These span cities, vehicles, homes, offices, factories, retail environments, worksites, logistics, and health. These use cases rely on ubiquitous connectivity and generate massive amounts of data at scale. These technologies enable new business opportunities, new ways to optimize and automate, and new ways to engage with users.
These capabilities have been enabled by a perfect storm of converging technologies: hardware, transport, and analytics.
  • Inexpensive sensors – As highlighted by this ITAC research (http://itac.ca/uploads/events/execforum2010/rob_lineback_10-6-10-2.ppt), sensor prices have been continually dropping.
  • Ubiquitous internet access – Devices and sensors can be connected to the internet thanks to pervasive mobile technology; less than a decade ago, the powerful computers in our pockets did not exist. Connectivity is powered by standards such as Bluetooth, Wi-Fi, and NFC, protocols such as Zigbee, and ubiquitous APIs.
  • Cloud technology – High-speed, on-demand processing, storage, and capabilities enabled by public cloud create the backbone for information collection and analysis on demand. These resources and platforms are easily accessible to all, to collect data and provide insight into the usage of these things.
What is the glue that makes all of this possible? Software is the key to IoT; it makes everything function together and creates these new capabilities and opportunities. This is why we believe seeing inside the software is key to visibility, for purposes of troubleshooting and creating insight into the IoT. The complexity and scale issues presented by IoT on both the backend (in the cloud) and the frontend (the things themselves) are a major challenge, not only for the systems themselves but for the management tooling of these interconnected and fluid systems.
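As a simple illustration of that glue, here is a minimal sketch of the device-side software loop: read a sensor, stamp the reading with a device ID and timestamp, and ship it to a cloud collector for analysis. The endpoint, the payload shape, and the read_temperature() helper are hypothetical assumptions of mine, not any specific product’s API.

# Minimal sketch of device-side IoT software: read, stamp, ship.
# Endpoint, payload fields, and sensor helper are hypothetical.
import json
import time
import urllib.error
import urllib.request

COLLECTOR_URL = "https://collector.example.com/v1/readings"  # hypothetical endpoint
DEVICE_ID = "thermostat-042"

def read_temperature() -> float:
    """Stand-in for a real sensor driver."""
    return 21.5

def publish(reading: dict) -> None:
    """POST one JSON reading to the cloud collector."""
    req = urllib.request.Request(
        COLLECTOR_URL,
        data=json.dumps(reading).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(req, timeout=5)
    except urllib.error.URLError as exc:
        print(f"send failed, will retry later: {exc}")  # no real collector behind this URL

for _ in range(3):  # a real device would loop indefinitely
    publish({"device": DEVICE_ID, "ts": time.time(), "temp_c": read_temperature()})
    time.sleep(60)  # battery and bandwidth constraints favor infrequent sends

Even in a loop this small, the software decides what gets measured, how often, and what happens when the network or the collector fails, which is exactly where visibility is needed.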
Other key cautions for IoT include:
  • Software is not being managed properly, both in terms of availability and performance.
  • Ownership of the data collected and mined.
  • Security of collected data, which could be used for malicious purposes.
  • Battery technology has largely not evolved for 15 years or more; this is a major limitation for today’s devices and connected things.
As a company, AppDynamics believes that IoT will be a key part of the computing and interconnected systems of the future. Our customers are increasingly applying our technologies to these use cases, and we look forward to becoming an integral part of both collecting and analyzing data within these systems.

Wednesday, August 5, 2015

Industry Insights: Regulating failure (Reg SCI)

When examining the complexity of today’s applications and environments, and why APM technologies are becoming more critical by the day, those responsible for an application’s lifecycle must understand what the application does. Aside from providing application visibility, APM tools help troubleshoot issues. The failure to see and troubleshoot is a constant struggle for those supporting applications. When I speak publicly, I’m always able to point to specific instances of failure that affect each of us; this month the hot-button items were travel issues caused by airline IT systems, stock market failures, and others.
Within the securities market, the SEC has adopted regulations that attempt to improve US securities markets’ ability to handle systems compliance and maintain integrity. To that end, on November 19, 2014, the SEC approved the adoption of Regulation Systems Compliance and Integrity (Reg SCI) under the Securities Exchange Act of 1934. The regulation requires compliance by November 2015, a few short months away. This new regulation was specifically created to prevent or better handle issues and incidents related to flash traffic crashing exchanges, security breaches, and other areas of system resilience. The financial markets are increasingly interconnected, making cascading issues a reality. The regulated entities include FINRA, trading systems, plan processors, and clearing houses. The requirements of Reg SCI include creating procedures, executing testing, monitoring effectively, and reporting data and status to the SEC. The reporting must be done on a regular basis and when major systems changes occur. The entities covered by the new Reg SCI mandates must also perform annual reviews, including testing the disaster recovery procedures of secondary sites and their ability to handle the same volume of transactions with the same responsiveness as the primary sites. The focus is primarily on production systems, but also includes development and testing processes.
When outages do occur, there are specific provisions as to what must be reported, including the root cause of these outages. This helps share the reason for issues which affect the technology that powers the financial markets.
With regard to what APM focuses on, the regulation requires that capacity planning be accomplished, but interestingly the capacity planning must be focused on transaction accuracy and timeliness to ensure market integrity. Most IT Operations professionals focus on infrastructure capacity planning, but this regulation clearly shifts that focus to the application layer. Stress testing must also be accomplished with major changes, once again requiring measurements. Reg SCI specifically notes that monitoring any third-party software or services, and how those systems perform, is a requirement. Monitoring the availability and performance of these services is an APM capability, as third-party performance often affects application performance and proper execution.
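To make that application-layer framing concrete, here is a minimal sketch of capacity planning driven by transaction timeliness and throughput headroom rather than by CPU or disk utilization alone. The sample latencies, the tested maximum rate, and the thresholds are illustrative assumptions of mine, not anything prescribed by Reg SCI.

# Minimal sketch: judge capacity by transaction timeliness and headroom.
# Sample data and thresholds are illustrative only.
import math

def percentile(samples, pct):
    """Nearest-rank percentile of a list of samples."""
    ordered = sorted(samples)
    k = max(0, math.ceil(pct / 100 * len(ordered)) - 1)
    return ordered[k]

# Observed order-entry latencies (ms) over a peak interval.
latencies_ms = [12, 15, 14, 18, 22, 95, 17, 16, 21, 19, 250, 18, 20, 23, 17]
observed_tps = 4_200      # transactions per second at that peak
tested_max_tps = 6_000    # highest rate sustained during stress testing

p95 = percentile(latencies_ms, 95)
p99 = percentile(latencies_ms, 99)
headroom = 1 - observed_tps / tested_max_tps

print(f"p95={p95} ms, p99={p99} ms, throughput headroom={headroom:.0%}")
if p99 > 200 or headroom < 0.25:
    print("Capacity review needed before the next high-volume event.")

The point is that the inputs are transactions and their response times, which is exactly the data an APM tool already collects.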
AppDynamics is a trusted APM provider to many of the world’s largest banks and exchanges, and to many more firms globally. We are also used within several companies that fall under Reg SCI. As a result, many of our customers are reaching out to us in order to comply with this new regulation. We’re pleased to discuss how we can help, and how AppDynamics is evolving to handle new types of capacity planning models in the future.

Friday, June 19, 2015

Dynatrace Growth Misinformation

For my valued readers: I wanted to point out some issues I’ve recently seen in the public domain. As a Gartner analyst, I heard many claims about 200% growth, and all kinds of data points that have little basis in fact. When those vendors are asked what actual numbers they are basing those growth claims on, the questions are often dodged.

Dynatrace recently used the Gartner name and brand in a press release.


I want to clarify the issues in their statements based on the actual figures published by Gartner in its Market Share data:

Dynatrace says in their press release:
“expand globally with more than three times the revenue of other new generation APM vendors”

First, let’s look at how new the various technologies are:
  1. Dynatrace Data Center RUM (DCRUM) is based on the Adlex technology acquired in 2005, which was created in the late 1990s. The DCRUM architecture has remained largely unchanged since the acquisition; the UI, however, has evolved over time.
  2. Dynatrace Synthetic Monitoring is based on the acquisition of Gomez in 2009; that technology was similarly built in the late 1990s. The user interface has evolved, along with the technologies acquired with Proxima Technologies in 2007 for Business Service Management (BSM). Dynatrace did try to sell BSM for several years, but has moved away from that; they do, however, still use the technology within their synthetic monitoring solution.
  3. Dynatrace Application Monitoring was acquired ($256m) in 2011, and also has a separate user interface.
  4. Ruxit is an organically developed, modern offering with a new user interface; it was released in 2014.
  5. It was announced this week that Keynote will be merging into Dynatrace Synthetic, giving Dynatrace yet another user interface for synthetic monitoring based on technology developed in the 1990s.

These products are not modern. Dynatrace is not a “new generation” company; in reality much of its revenue comes from technologies more than a decade old. Ruxit is clearly a “new generation” product, but it has little revenue since it’s a new, SaaS-only offering.

“Dynatrace’s 2014 growth rate increased to 20.6%, nearly three times higher than the previous year’s growth and 24% faster than the overall global APM market.”

According to Gartner market data (“Market Share: All Software Markets, Worldwide, 2014”), Dynatrace’s growth rate in 2013 was 20.5%, while the overall APM market grew only 13%.

In 2014, the Gartner-published growth rate for Dynatrace was 20.6%, while the overall market grew 15.8%. So roughly three-quarters of Dynatrace’s growth rate can be attributed to the overall growth of the market, from which all APM vendors benefited. Based on this, a reasonable inference is that Dynatrace’s growth numbers benefited substantially from overall market growth in these periods.
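For transparency, here is the back-of-the-envelope arithmetic behind that “three-quarters” statement, using only the published growth figures; the “share explained by the market” framing is my own simplification, not a Gartner metric.

# Back-of-the-envelope check using the published 2014 growth rates.
dynatrace_growth_2014 = 0.206   # Dynatrace revenue growth, 2014
apm_market_growth_2014 = 0.158  # overall APM market growth, 2014

share_explained_by_market = apm_market_growth_2014 / dynatrace_growth_2014
print(f"{share_explained_by_market:.0%} of Dynatrace's 2014 growth rate "
      "matches the market's own growth")   # ~77%, i.e. roughly three-quarters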

I wanted to set the record straight, and to use the data to shed some additional light on the Dynatrace press release.

Disclaimer: This is my personal website and reflects my views and opinions only. Any comments made on this website by myself or by third parties do not necessarily reflect views or opinions of my employer.

Wednesday, May 20, 2015

Performance Monitoring, another view

Originally posted on devops.com : http://devops.com/2015/05/20/performance-monitoring-another-view/

This is my first post on DevOps.com. For those who haven’t read my other writings, I currently work as the VP of Market Development and Insights at AppDynamics. I joined in February 2015 after four years at Gartner as a Research VP covering all things monitoring. During my time as a Gartner analyst, I would help buyers make decisions about purchasing monitoring software. When I read articles on the internet that blatantly disregard best practices, I either smell that something is fishy (sponsorship) or conclude that the authors just didn’t follow a process. Conversely, some people do follow a proper process. I specifically wrote this up after reading a really fishy article.

Step 1 – Admit it

The first step to solving a problem is admitting you have one. Most monitoring tool buyers realize they have too many tools, and that none of them help isolate root cause. This is a result of most buying happening at the infrastructure layer, while the application layer is either lightly exercised synthetically or disregarded. People soon realize you must find an application-layer monitoring tool, or write your own instrumentation. The vendors who sell logging software would love for you to generate a lot of rich logs so they can help you solve problems, but realistically this doesn’t work in most enterprises, as they are using a combination of custom and packaged software, and forcing developers to write consistent logging is a frustrating and futile task.

Step 2 – Document it

The next phase is to document what you are using today. Most organizations have over 10 monitoring tools, and they typically want to buy another one. As they consider APM tools, they should document what they have today and determine a plan to consolidate and eliminate redundant tools. Tools will remain within silos until you can fix the organizational structure.

Step 3 – Requirements

I can’t tell you how many times vendors set requirements for buyers. You can tell when vendors do this rather than actually understanding the requirements the user has, and whether the solution will scale and be operable by the team. Purchasing a tool which requires full-time consultants on staff to keep it running is no longer feasible today; sorry to the old big ITOM vendors (formerly the big four), but people don’t want that anymore.

Step 4 – Test it!

If you are evaluating software you MUST test it, and make sure it works in your environment. Better yet, load test the software to determine whether the vendor’s claims of overhead and visibility are in fact true. I see far too many people buy software they do not test.
Getting back to the article which prompted this post:
The author is located in Austria, and I believe there is a connection to Dynatrace; Dynatrace’s R&D and technical leadership are in Austria, but this is just conjecture. The article is written with a bias that is obvious to any reader who has used these tools. The evaluator did not test these products, or if he did, he didn’t have a testing methodology.
Finally, the author’s employer is new, and all of its employees have come from Automic Software in the past year. I don’t need to explain to those who work with Automic the state of things there. http://www.glassdoor.com/Reviews/Automic-Software-Reviews-E269431.htm#
I think I’ve spent enough time on the background; here are my specific issues with the content.
The screenshots are taken from various websites and are not current or accurate. If you write an evaluation of technology, at least test it and use screenshots from that testing!
This post was specifically meant to discuss DevOps, yet the author doesn’t discuss the need to monitor microservices or asynchronous application patterns, which are clearly the current architectures of choice. This is especially the case in DevOps or continuous release environments. In fact, Microsoft themselves announced new products for building these patterns last week at Ignite, including Nano Server and Azure Service Fabric. The difficulty with microservices is that each external request spawns a large number of internal requests, which often start and stop out of order, creating issues when monitoring them. This causes major problems for APM products, and very few handle it effectively today.
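To illustrate the out-of-order problem, here is a minimal sketch of one external request fanning out into several asynchronous internal requests, with a correlation ID propagated so each internal span stays attributable to its originating request even when completions interleave. The IDs, service names, and timings are illustrative assumptions of mine, not any vendor’s tracing implementation.

# Minimal sketch: async fan-out with a propagated correlation ID.
# Service names, IDs, and timings are illustrative only.
import asyncio
import random
import uuid
from contextvars import ContextVar

request_id: ContextVar[str] = ContextVar("request_id")

async def internal_call(name: str) -> None:
    await asyncio.sleep(random.uniform(0.01, 0.05))   # completes out of order
    # contextvars flow into asyncio tasks, so the ID survives the async hop.
    print(f"request={request_id.get()} span={name} finished")

async def handle_external_request(path: str) -> None:
    request_id.set(uuid.uuid4().hex[:8])               # set once at the edge
    # One external request fans out into several internal requests.
    await asyncio.gather(*(internal_call(svc) for svc in
                           ("auth", "catalog", "pricing", "inventory")))
    print(f"request={request_id.get()} path={path} complete")

async def main() -> None:
    await asyncio.gather(handle_external_request("/checkout"),
                         handle_external_request("/search"))

asyncio.run(main())

Without some form of propagated context like this, the interleaved completions from concurrent requests cannot be stitched back into coherent transactions, which is exactly where many APM products struggle.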
The article calls out user interfaces and deployment models inconsistently, and fails to mention that Dynatrace is an on-premises software product. Dynatrace also uses a thick Java client built on Eclipse, which is the definition of a heavy client, something the author calls out for other products in this article.
Continuing the incorrect facts in this article, the author calls out Dell (formerly Quest) Foglight. This is not Dell’s current product for modern architectures; Dell has moved to a SaaS product called Foglight APM SaaS, which is not what was evaluated. The evaluator should look at the current product (noticing a trend here).
Finally the AppDynamics review is inaccurate:
Editor’s note: the following is this author’s opinion, not the opinion of DevOps.com. As the author states, he is employed by AppDynamics.
“code level visibility may be limited here to reduce overhead.”
In order to scale AppDynamics to monitor thousands of systems with one single on-premises controller (something no other APM vendor delivers), the default behavior relies on advanced smart instrumentation, but every transaction is monitored and measured. All products limit the amount of data they capture in some way. AppDynamics has several modes, including one which captures full call traces for every execution.
“lack of IIS request visibility”
AppDynamics provides excellent visibility into IIS requests; I’m not sure what the author means here. AppDynamics also supports implementation on Azure using NuGet.
“features wall clock time with no CPU breakdown”
Once again, I’m not sure what the author is referring to here. The product provides several ways of viewing CPU usage.
“Its flash-based web UI and rather cumbersome agent configuration”
The agent installation is no different from that of other products, which shows the author has not installed the product. Finally, the product is HTML5, and has been for a while; there are still some Flash views in configuration screens, but those are being removed with each release. I’d much rather have a web-based UI with a little remaining Flash than a fat-client UI requiring a large download.
AppDynamics has a single UI which goes far deeper into database performance than the other products in this review. AppDynamics also provides far more capabilities than what was highlighted in this article, including synthetic monitoring. AppDynamics offers deployment of the same software via SaaS or on-premises installation.
Finally, this review did not look at analytics, which is clearly an area of increasing demand within APM. That being said, this review is far from factual or useful.
Hopefully this sets some of the record straight. Please leave comments here, or reach me @jkowall on Twitter!
Thanks.