Thursday, December 30, 2010

Dealing with Information Overload

I follow a lot of different sites and tend to read my news every other day.  I'm always trying to find the best tools to deal with my data on multiple devices (iPhone, iPad) and multiple computers (Google Chrome OS, Windows).  I have found the best combination to be the following:

1.   Master feeds in Google Reader (works great on any device)
2.   FeedDemon (Windows)

FeedDemon is a great app: it syncs with my Google feeds and gives me the same view of what I have and haven't read.  It has a better user interface for getting through all that data, and I can customize the views to match the way I want to fly through my news.  These kinds of user interface tweaks aren't possible with Google Reader yet, but it will improve.  Google Reader is a great app and works well in both mobile and desktop browsers.

I use Google Chrome as my primary browser; it wins the speed race hands down and supports the critical extensions I need.  I moved off Firefox as my primary browser over 6 months ago.  I love the fact that my extensions and data (most of them) sync across every system, even the Google netbook.

List of extensions I use:


AdBlock - Version: 2.2.19
Browser Button for AdBlock - Version: 0.0.12
gCast Weather - Version: 2.1.2
Google Translate - Version: 1.2.3.1
Google Voice (by Google) - Version: 2.2.3.4
IE Tab - Version: 1.4.30.4
LastPass - Version: 1.70.11
Lazarus: Form Recovery - Version: 3.0.1
PriceBlink - Version: 2.1
Tweetbeat Firsthand - Version: 0.6.3
Weather Window Beta by WeatherBug - Version: 1.0.5
Woot! - Version: 1.1
Xmarks Bookmark and Password Sync - Version: 0.9.0
Yoono Web - Version: 1.0.0.1

Friday, December 10, 2010

Google Chrome Netbook

I came home today to find a strange looking box with odd shapes on the outside of it. When I opened it, I didn’t expect to find a small netbook inside, shipped from Google. This is the Chrome OS netbook they shipped to me for free. It’s a nice form factor, not too large and not too small. Once the OS booted up, you just hook it up to wifi, log in with your Google account, snap a profile picture, and you are off. After some time it upgraded itself, similar to Chrome; it required a reboot after the update. It runs very well.

It does a good job with Flash and other web media. It took quite a while to get my full extension sync from the PC, but most of my extensions worked without a hitch. The major one which doesn’t work is LastPass, which I really need! I tried many sites on it, and everything seems to work well. I also installed several “apps” from the store which make for easy access to my Google products and other sites I use a lot. It will make a good iPad-type tool, something to grab and use. I haven’t set up the broadband yet, but I will soon.

The physical design is nice and very “black.” I really like the addition of reload, forward, back, and search buttons; they are handy to have. I would have loved to have an “@” key. There is no need for caps lock anymore, so it was nice to see it gone. The keyboard layout is a little “off” for my touch typing, but I will get used to it. The control and alt keys are very large. If you multitouch on the trackpad it scrolls, which is pretty handy once you get used to it.

The main downside to the design is the trackpad, which allows for left and right click but doesn’t use buttons. It uses the click-type pad that the Macs use, which is somewhat annoying since I am used to having two hands on the trackpad: one hand for clicking and one hand for moving. If you try to do this with this trackpad it screws it all up. I would also love to have page up and page down keys, as well as home/end keys. I am a power user, and I use the page up and page down keys even more than the arrow keys when I am browsing the web.

Overall it’s a cool device and I’m looking forward to using it more extensively.

Great freeware system admin tools

Thanks to the awesome community and software over at Spiceworks - www.spiceworks.com - I found some great free tools from Netwrix - www.netwrix.com - that are superb for any system admin dealing with Windows systems. I could have used their file server monitoring freeware in the past for basic audits; I wish I had known about it sooner. The other tools we are using are the freeware versions below. Essentially the free versions just email you at 3am every night with changes or reports, and you can change the schedule using the scheduled task control panel:

Password Expiration Notifier - http://www.netwrix.com/password_expiration_notifier_freeware.html
This tool emails end users when their passwords are about to expire. It's good for us because we have some remote users and Mac users who do not otherwise get notified. This should prevent the lockouts we see when passwords expire.

AD Change Reporter - http://www.netwrix.com/active_directory_change_reporting_freeware.html
This product will show you changes in Active Directory, Exchange, and Group Policy. This is something I have been wanting to have for a while, but it was always too expensive. Now you can have it for free!

I emailed the company a few days ago to get pricing on the commercial products.

Wednesday, November 24, 2010

Using AWS for larger businesses

Netflix is one of those really secretive companies. There have been some interesting articles on how they run operations for disc delivery, but not much on the way they deliver digital content. I came across this really cool article on how they use AWS:

http://www.readwriteweb.com/cloud/2010/11/why-netflix-switched-its-api-a.php

Pretty interesting read. I'm not sure I agree with some of the statements about needing fewer system admins and database folks when using AWS. I can understand needing less datacenter staff, but managing virtual or cloud infrastructure is just as much work. Obviously this is only the case when you're running customized software and databases built internally (as Netflix does). You still need to release software, manage the databases, and handle the same problems you would if you were doing it all in house.

The only items you don't need to worry about would be the following:

Backups
Provisioning new hardware (which is pretty simple if you run your own VMware in-house)

Here is another good read from a site I follow. They moved to EC2 and wrote a review one year later:

http://4sysops.com/archives/4sysops-one-year-in-the-cloud-part-1-costs/

The costs seem to be higher than using a normal colo server, so I'm not sure what the ROI would be for companies moving to EC2 or AWS. It would be good to see some more detailed comparisons of how companies use the services, and what the ROI is.

Wednesday, November 3, 2010

PCI compliance and SSLv2

So I am doing a PCI audit, and one of the requirements is that there must not be weak cipher support enabled on systems which collect credit cards from the web. I started doing some testing against some of the larger ecommerce sites out there, and I had some pretty startling findings. SSLv3 has been in browsers since 1996 (think Mozilla 2.0... way before we had Firefox), so there is really no excuse to still be allowing SSLv2.

http://blog.zenone.org/2009/03/pci-compliance-disable-sslv2-and-weak.html
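
For anyone who wants to repeat the test: the simplest check is asking openssl to negotiate SSLv2 and nothing else. Here's a rough sketch of the loop I used, with a placeholder host list; it assumes the openssl CLI is on your PATH (exit codes and output format vary a bit between OpenSSL builds, so treat the success check as approximate):

#!/usr/bin/env python
# Ask openssl to speak SSLv2 only; if the handshake succeeds,
# the server still allows the weak protocol.
import subprocess

SITES = ["www.example.com"]  # replace with the sites you want to audit

for host in SITES:
    proc = subprocess.Popen(
        ["openssl", "s_client", "-ssl2", "-connect", "%s:443" % host],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
        stderr=subprocess.PIPE)
    out, err = proc.communicate("")
    # On a successful handshake s_client prints session details
    # (including a Master-Key line); on failure it errors out.
    if proc.returncode == 0 and "Master-Key" in out:
        print "%s: SSLv2 ENABLED (PCI violation)" % host
    else:
        print "%s: SSLv2 appears disabled" % host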

From my testing, these sites do have SSLv2 disabled: Google, PayPal, Delta, E*Trade.

These sites don’t have SSLv2 disabled, which is strictly against PCI: Home Depot, Bank of America, Scottrade, Microsoft, Amazon, QVC, Dell, Orbitz.

It's really concerning that these big commerce sites allow something like that to slip by the auditors. Time to hire me to fix your compliance :)

Saturday, October 16, 2010

Akamai user conference

Had a good time in Miami for a few days this week, and got a lot of good content from the conference. I'm going to go over my notes some more next week, but here are some of the highlights from the show:

New offerings
Fraud detection and scoring - Akamai does tokenization, removing PCI scope, and they can build a profile on end users and give them scores.

Cybersource - they are one of the processors for the Akamai PCI solution.

Edge encryption - encrypts the data at the edge all the way to the database; only privileged systems can decrypt it. (might be useful, not sure)

SiteShield - an ACL allowing requests only from specific Akamai servers (protects against DDoS)

ADS - predictive analytics
Shows the proper ads based on what users are looking at across all sites
Looks across 500+ shopping sites and 160M users
Doesn't use pixels to allocate ads (that slows down the site)

Akamai - Velocitude mobile reformatting on the fly - http://www.akamai.com/html/about/press/releases/2010/press_061010.html
A special tagging system which takes content and displays it for the proper device. Resized images and content are done on the fly. Includes all data.

DNSSEC
Verisign:
2010 - .NET
2011 - .COM

Akamai implements at the end of 2010, GA early 2011.
Signed by Akamai; no need to manage it.
Need to look at GoDaddy. (they do some of our DNS along with Akamai)
KSK: you make your own key and keep the private key.
Look into internally. Microsoft DNS support.

IPv6
Need to start looking into routing IPv6. Check on firewalls (security support for IPS and others) and load balancers.
Geolocation for v6?
Look at IPv6 MTU issues.
Idea: use Akamai EdgeScape for geolocation versus what we do now.
Google support for GA and other tools we use? Reporting infrastructure.
 
Q4:
Whitepaper
Roadmap

Q1 2011:
Tech preview

Q1 2012:
Limited availability

Akamai will NAT to v4 for you.

Tuesday, October 5, 2010

HP Upgrades – QC, QTP, PC

Before I go into these 3 suites of tools: when is HP finally going to update BAC, RUM, or Diagnostics? These tools have seemed really stagnant for the last 3 years. I’m not going to HP Software Universe anymore, and I keep getting new account reps, so I have no idea what the roadmap is these days. That’s enough HP bashing; on to the good stuff.

We did the yearly HP upgrade over the last couple of weeks; here is the rundown on the technologies and what was involved in each:

1. QTP – Very easy upgrade; did this on our terminal server and a desktop. The license server is now supported on 64-bit machines, so we moved it off an older 2003 box onto a 2008 R2 system. No issues with the upgrade, and there seem to be a lot of improvements. Still waiting for feedback from our QA team on them.

2. PC – Ended up building a couple of new VMs for this, as we moved it onto Windows 2008 R2 (64-bit) as well. There were no issues with the reinstall or with moving our scripts and data over to the new systems. The tool itself didn’t change much, but the fact that it runs on 64-bit is a good step toward getting rid of our 2003 systems.

3. QC – This one is the problem child. Initially we were going to do a larger rollout of QC10, so we built 2 VMs, one for the DB and one for the app. The DB needed to be on an older OS and such, which was annoying. I ended up reinstalling it onto a Windows 2008 (32-bit) system and moving to SQL 2008; from the documentation (which is not all that clear), they don’t yet support SQL 2008 R2 or Windows 2008 R2. I have a case open with HP, as during setup it doesn’t seem to want to connect to the database. I have checked the SQL Server TCP settings and verified the login/password both locally and over the network. More on this one as HP helps me with the issues.
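
For anyone debugging something similar: before fighting the installer, it's worth proving basic TCP reachability to SQL Server from the app box. A minimal sketch, with a placeholder hostname and the default SQL Server port:

import socket

# Hypothetical DB host; 1433 is the SQL Server default TCP port
host, port = "qc-db.example.local", 1433

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(5)
try:
    s.connect((host, port))
    print "TCP connection to %s:%d OK" % (host, port)
except socket.error, e:
    print "Cannot reach %s:%d - %s" % (host, port, e)
finally:
    s.close()

If that connects, the next suspects are named-instance ports and the SQL Browser service rather than the credentials.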

Sorry for the lack of updates, I should have a few posts coming up now.

Wednesday, September 1, 2010

Password management

Personally I am a big fan of proper password management procedures. For my personal passwords I have always used the open source tool KeePass (http://keepass.info/), but it was always missing two items:

  1. Better browser integration (I know you can use the form-filling plugins, but they aren't very well done or supported)
  2. Distribution (I know you can use Dropbox or something else, and it works fine)

For my encrypted data I have always built hidden volumes using another open source tool, TrueCrypt (http://www.truecrypt.org/). The product works great, but I find that I need to encrypt less and less of my data these days.

I just replaced them all with LastPass (http://lastpass.com/), which is a very impressive product. It integrates with pretty much every major browser out there, it's all centralized, and it allows web access to all of your data. It can import from pretty much any browser database or product (such as KeePass). It's $12 per year for premium; I buy products like this because the value is high and the cost is low. If we don't support companies like this, they won't be around when we need them.

For our enterprise I have used several nice distributed products in the past, but one always stands out as a cheap and well-built solution: Password Manager Pro (http://www.manageengine.com/products/passwordmanagerpro/). We don't use their enterprise products, which allow for centralized password reset and such. All of our systems, regardless of whether they are Linux or Windows, use Active Directory for authentication (thanks, winbind).

The product is a great secure repository, and it allows us to share the relevant passwords with the finance, HR, marketing, development, or database teams. It allows for dynamic groupings, which are very flexible and based on the content of the resources defined.

Monday, August 23, 2010

OpenSolaris

At my previous company we were a heavy user of Solaris, and we also had a lot of legacy SCO systems. 3-4 years ago, some person (who shall remain nameless) was pushing OpenSolaris as "the future"; personally I thought the guy was way off base. He did deploy some of it, and it worked well, but the problem was the support and future of yet another player in the x86/64 market. There was no future. I saw it, but apparently other people in management didn't. Then I read this article:

http://www.infoworld.com/d/open-source/requiem-os-opensolaris-board-closes-shop-961?source=rss_infoworld_news

I love being right :)

Sorry to see you go, but it's for the better. Hopefully Oracle will invest more resources into Linux, which it hasn't been doing as much since the Sun purchase.

Saturday, August 21, 2010

Thoughts on the McAfee Intel purchase

I've been waiting quite a while for a major security firm to be purchased by one of the big boys. I am glad that Intel was the first one to start this trend, because they are generally a hardware-only player. If security were embedded at that level it would create a differentiator from competitors, whether they are x64-based or other chips (Oracle, IBM). Security has become very commoditized and consolidated over the last several years.

You haven't seen much innovation in several years either. Is that because we've solved the problem? I think not… Is that because there isn't capital in this market? Nope… I think the main reason is the massive consolidation and the work needed to integrate all of these smaller companies together. You are also seeing players like Microsoft developing a larger security portfolio, as well as network vendors integrating more security features and products into their appliances. If you look back 5 years, there wasn't much as far as UTM (Unified Threat Management) devices go; now every firewall vendor has one, and you can find hundreds of products, both commercial and open source, in this area.

For some other thoughts on how security is being embedded across the stack, see Bruce Schneier, who is a superb writer and author as well as a great cryptographer:

http://www.schneier.com/blog/archives/2010/08/intel_buys_mcaf.html

http://www.schneier.com/essay-196.html

http://www.schneier.com/news-060.html

Thursday, August 5, 2010

ESXi 4.0 – 4.1 Planning

Writing this thanks to gogo wireless… Love this service.

I'm going to start upgrading our hosts to 4.1 next weekend, probably. I'm going to try Update Manager even though it crashed and burned on my 3.5-to-4.0 upgrade, where I ended up using the host utility. I know the command line upgrader works.

I read this on the Spiceworks message board:

Resolution:

This is worth knowing, as it's a bug and a definite gotcha.

VMware said:

"I had one of the escalation engineers for Update Manager look at the log and here is what he said

The customer imported the pre-upgrade offline bundle which is NOT needed and in fact causes problems.

[2010-07-28 13:36:55:781 'DownloadOfflinePatchTask.DownloadOfflinePatchTask{9}' 3700 INFO] [vciTaskBase, 530] Task started...

[2010-07-28 13:36:55:781 'DownloadOfflinePatchTask.DownloadOfflinePatchTask{9}' 3700 INFO] [downloadOfflinePatchTask, 123] Upload offline bundle: C:\Windows\TEMP\vum8205261630712700578.pre-upgrade-from-ESX4.0-to-4.1.0-0.0.260247-release.zip

We are working on correcting the situation so this doesn't happen. The existing pre-upgrade bundle will be replaced and we are working on a KB for those that have gotten into the situation (iKB 1024805). Unfortunately, there is no easy workaround once VUM is in this situation. A reinstall/DB reinit is suggested."

The end result is that VUM didn't want to reinstall cleanly, so I had to nuke my vCenter server and rebuild it from scratch.

So, DON'T APPLY THE PRE-UPGRADE PACKAGE IF YOU ARE USING UPDATE MANAGER TO UPGRADE YOUR ESX HOSTS!

:)

Maybe useful next weekend :)

Monday, July 26, 2010

VMware 4.1

Let me start off by saying that this site is great; it's always entertaining and filled with great data: http://get-admin.com/blog/

I'm very happy that VMware released 4.1 recently. After the horror stories I read about 4.0U1 we decided to skip it and wait for 4.1; there wasn't a lot in the updates we cared about anyway. There are quite a few interesting features in 4.1, and one of the good things VMware has done is finally kill off ESX (after this release). ESXi has been great for us over the last couple of years, and I haven't had any complaints since switching over to it from ESX. I found Update Manager was not the most reliable when moving from 3.x to 4.x on the physical systems themselves, so we opted to use the host upgrade tool. That is not a supported method to move from 4.x to 4.1, so we will probably have to give Update Manager another run, which concerns me. At least it's not as complex as upgrading Hyper-V :)

I will probably start upgrading our enterprise (not production) systems to 4.1 in the next couple weeks, and I will post my findings on the blog as I go.

Tuesday, July 6, 2010

Thoughts on IBM BigFix Purchase

BigFix makes some excellent products, and they have been moving in great directions over the last couple of years, out of pure remediation and into configuration management and control. I would have loved to purchase them for use at my current company, but the pricing was a bit higher than what I'm paying for Shavlik HFNetChk Protect, which is another good product but far more limited. I wanted one tool to patch both Linux and Windows systems.

IBM has been struggling for years to provide a good provisioning and patch management tool. First they were pushing TPM, which is probably the worst product I have seen IBM release. Unfortunately a company I worked for previously was obsessed with using this product, which most Tivoli enterprise customers get for free and completely disregard. I spent a good amount of time looking at the product and its capabilities, or lack thereof. I'll conclude my rant now, but I'm happy to see IBM adding a superb replacement for TPM, along with the additional security-related products they will acquire with the BigFix purchase.

I was also quite surprised at the cost of the purchase, at $400M. I know BigFix has a lot of customers, and they sell a service, which is nice both for the operating business and for customers, who can bill it against opex versus capex. I would have assumed IBM would have had to pay more for the company. It will be interesting to see which features IBM takes from them and puts into Tivoli, and which others become part of the ISS portfolio over time.

Monday, July 5, 2010

Simplify and Automate

Now that the workload has eased a bit over the last month or so, we can spend time on project work. It's always been my philosophy to simplify as much as possible, mostly because I end up having to fix messes, which are normally caused by undue complexity. Complexity can affect performance, availability, and manageability. Automation can often create complexity, as can requests by various people in the business who don't necessarily plan the projects or requests they make of others (especially of development and operations).

That being said, I often get blocked when I try to simplify things, because people want to build things out in a more redundant manner than the business needs require. There are a lot of ways to create a redundant system without creating complexity; you just have to step back and look at the overall configuration and requirements to come up with the best solution.

We get a lot of requests from our QA team to reload various Resin app servers and other processes. What we are doing now is creating a web-based interface for them to do the reloads on their own; a rough sketch of the idea is below. This eliminates the need for operations to run the scripts, and saves time and resources.
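
To be clear, this is just the shape of the thing, not our actual implementation: a tiny HTTP listener that runs a canned reload script, so QA never passes arbitrary input to a shell. The script path and port are placeholder assumptions.

import subprocess
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

# Hypothetical wrapper script that does the actual Resin reload
RELOAD_SCRIPT = "/usr/local/bin/reload-resin.sh"

class ReloadHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/reload":
            # Run only the canned script; never feed user input to a shell
            rc = subprocess.call([RELOAD_SCRIPT])
            self.send_response(200)
            self.end_headers()
            self.wfile.write("Reload script exited with code %d\n" % rc)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ReloadHandler).serve_forever()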

Saturday, May 22, 2010

JMS, Endeca

We are building a services tier based on ActiveMQ JMS and our standard Resin web/app servers. We are building this with 2 nodes which will share a file system. We have a few services to start with, which are internal only at first. Should be interesting.

We are also working on some new products, and we are pretty close to selecting Endeca as the search indexing and SEO engine for them. More on that as we get along with development and implementation. The product looks pretty cool, so it should be fun.

Friday, April 30, 2010

Upgrade Land for Microsoft - SharePoint / Exchange 2010, and JIRA 4.1

    I have been using Office 2010 for a while, moving from the preview to the beta, and now I am finally on the release. We decided corporately to stick to 32-bit even where we are on 64-bit Windows 7 on our newer systems. The main reason for staying on 32-bit was that almost all of the add-ons on the market are written for 32-bit only; when I was testing on 64-bit, I wished I had just stuck with 32. The released version has been stable for the last few days, but I didn't have many issues with the beta release either.

    Now that we have Office underway, we are beginning upgrades to the other 2010 products we use from Microsoft. The first one is SharePoint; we are on MOSS 2007 right now. The migration was slightly painful, and here are some of the pointers I found helpful in the migration.

    The next step is a bunch of testing, and hopefully cutting over to the new version next weekend (5/8/10). We avoided any custom components on our SharePoint, which made the migration much simpler. We have yet to hear any complaints about the migrated test data. The new interface is awesome, and works great in Chrome as well. Great job to the Microsoft team on this product!

    We are in the process of an Exchange 2010 upgrade as well; we are building out some new VMs and will migrate the mailboxes over. That project is still in its early stages, so I will post more as we go. My colleague is the main lead on that project.

    On another side note, I moved us from JIRA 4.0 to JIRA 4.1. The upgrade was somewhat manual and required some work and planning. The new JIRA interface is very nice, and it's good to see them finally changing the old reliable interface they have had for many years. Now if they would only fix the UI for the admin section so I could stop scrolling through a huge list, that would be great!

Tridion Upgrade 2009 SP1

We decided after some pain to give Tridion another go over here. We have some really sharp guys helping us from the firm, and they have helped us immensely. We just upgraded to the newest version, and after the initial struggle to get it running it has gone very smoothly and simply. Within a couple of hours we moved everything over to the new version, and it's working flawlessly. It was very simple, and it was good to see the quality of the installers; they handled pretty much everything without any additional manual steps. We are looking forward to moving to the next version later this summer as we beta test for them.

    Lots of the issues with the product were due to the implementation that was designed for us. We will be redoing our site and building it properly using the new version. I think with proper guidance and a good technical team we will not have the issues of the past. We are also moving a lot of our custom code from the current codebase into a web services layer that will isolate our code from the main Tridion content. I am looking forward to the project.

    There are lots of other things going on today: a new database server swap for our performance testing, and a bunch of other project work. It's good that it's quiet in the office as far as non-project work goes.

Monday, April 19, 2010

Week in Geneva

Just wrapping up a week of pretty intense work here in our datacenter. Here is a list of some of the fun projects we accomplished:

  1. Disk upgrades to the NetApp
    1. NetApp locally here in Switzerland went out of their way to fix issues caused by my purchase in the US. Last time I buy in the US and ship overseas.
    2. NetApp also looked over the system and made some very good corrections and suggestions; the great customer support is much appreciated.
  2. Reconfigured network
    1. Moved 10GE to other subnets
    2. Changed the NetApp network config
    3. Ran several additional cables and built out other infrastructure
  3. Firewall upgrades
  4. F5 upgrades from OS v9.4.3 to OS v10.1
  5. Installed 3 new VM servers
  6. Installed memory in systems (DB, VM)
  7. Cleaned up the office and built other infrastructure
  8. Major failover testing of the NetApp, firewalls, and load balancers

Now we are trying to get home despite the volcanic ash situation in Europe. It looks like we will be driving our rental car to Barcelona and taking a flight from there. Should be an interesting little side trip.

More fun later; glad to have a little break after working crazy hours the last week. :)

Tuesday, April 13, 2010

Finally a way to block those pesky bots stealing content

We've been using a product over at MFG which is sort of like an invisible captcha tool. The beauty of the product is that the end user doesn't even know it's running, but the accuracy and technology used are very unique and cutting edge. We first started speaking with Pramana – www.pramana.com – about a year ago. Initially there were issues with the technology, but it has progressed quickly and become rock solid. I was unable to produce false positives in all my testing and scripting.

We implemented the technology (Pramana HumanPresent - www.pramana.com/human-present/) because of issues with competitors who sell databases and information about manufacturing companies, essentially stealing our content. They use various methods, including screen scraping and SEO scraping bots. We have observed this on many occasions, and we even had one company who wanted to sell out to us while they were stealing our data! (somewhat legally)

The product is not super simple to implement, but the benefits are great. They have SDKs for a bunch of languages (we use Java, which is more complex than the PHP API or the others they have). The SDKs give you all kinds of granular control.

We are a paying customer of Pramana, and they had the great idea of letting users use the service for free (called BotAlert - http://www.pramana.com/botalert/) in order to detect and measure the bots (you get nice daily reports from them); if you want to block the bots, then you have to pay. The cost is very reasonable considering it doesn't inconvenience users, and it can allow search engine crawlers to index content while homebuilt screen scrapers are blocked.

Thursday, March 25, 2010

F5 Persistence and my 6-week battle with support

We've been having issues with persistence on our F5s since we launched our new product, and we have tried many different ways of getting our clients to stick to a server. Of course, the first step was standard cookie persistence, with the F5 injecting the cookie. All of our SSL traffic is terminated on the F5, which makes cookie persistence work fine even for SSL. After we kept seeing clients going to many servers, we figured it would be safe to persist on the JSESSIONID cookie, a standard Java application server cookie that is always unique per session. We implemented the following iRule (slightly modified in order to get more logging):

http://devcentral.f5.com/Default.aspx?tabid=53&view=topic&postid=1171255 (registration is free)

when HTTP_REQUEST {
    # Check if there is a JSESSIONID cookie
    if {[HTTP::cookie "JSESSIONID"] ne ""}{
        # Persist off of the cookie value with a timeout of 2 hours (7200 seconds)
        persist uie [string tolower [HTTP::cookie "JSESSIONID"]] 7200
        # Log that we're using the cookie value for persistence and the persistence key if it exists.
        log local0. "[IP::client_addr]:[TCP::client_port]: Request to [HTTP::uri] on server [LB::server] with cookie: [HTTP::cookie value JSESSIONID]"
    } else {
        # Parse the jsessionid from the path
        set jsess [findstr [string tolower [HTTP::path]] "jsessionid=" 11]
        # Use the jsessionid from the path for persisting with a timeout of 2 hours (7200 seconds)
        if { $jsess != "" } {
            persist uie $jsess 7200
            # Log that we're using the path jsessionid for persistence and the persistence key if it exists.
            log local0. "[IP::client_addr]:[TCP::client_port]: Request to [HTTP::uri] on server [LB::server] used persistence record from path: [persist lookup uie $jsess]"
        }
    }
}

when HTTP_RESPONSE {
    # Check if there is a jsessionid cookie in the response
    if {[HTTP::cookie "JSESSIONID"] ne ""} {
        # Persist off of the cookie value with a timeout of 2 hours (7200 seconds)
        persist add uie [string tolower [HTTP::cookie "JSESSIONID"]] 7200
        # Log the response
        log local0. "[IP::client_addr]:[TCP::client_port]: Request to server [LB::server] with cookie: [HTTP::cookie value JSESSIONID]. Added persistence record from cookie: [persist lookup uie [string tolower [HTTP::cookie "JSESSIONID"]]]"
    }
}

when LB_SELECTED {
    log "From [IP::client_addr] to physical server [LB::server] the cookie JSESSIONID is [HTTP::cookie "JSESSIONID"] URI JESSIONID is [findstr [string tolower [HTTP::path]] "jsessionid=" 11] "
}

We've replicated the problem and done 3 rounds of packet captures, and you can always see the issue in the logging from the iRule above:

Mar 21 01:14:25 tmm tmm[1629]: Rule JSESSION_iRule_withlogging <HTTP_REQUEST>: CLIENTIP:63231: Request to /images/mfg/icons/search_cross.png on server -http-pool x.x.x.19 80 with cookie: abcND0QYKjeOCczB8c_Ds

Mar 21 01:14:25 tmm tmm[1629]: Rule JSESSION_iRule_withlogging <HTTP_REQUEST>: CLIENTIP:63229: Request to /images/mfg/icons/icon_largemessages.png on server -http-pool BACKENDSUBNET.19 80 with cookie: abcND0QYKjeOCczB8c_Ds

Mar 21 01:14:25 tmm tmm[1629]: Rule JSESSION_iRule_withlogging <HTTP_REQUEST>: CLIENTIP:63228: Request to /images/mfg/icons/icon_clock.png on server -http-pool BACKENDSUBNET.19 80 with cookie: abcND0QYKjeOCczB8c_Ds

Mar 21 01:14:25 tmm tmm[1629]: Rule JSESSION_iRule_withlogging <HTTP_REQUEST>: CLIENTIP:63231: Request to /images/mfg/icons/icon_largequotes.png on server -http-pool BACKENDSUBNET.19 80 with cookie: abcND0QYKjeOCczB8c_Ds

Mar 21 01:14:25 tmm tmm[1629]: Rule JSESSION_iRule_withlogging <HTTP_REQUEST>: CLIENTIP:63233: Request to /images/mfg/icons/icon_largendas.png on server -http-pool BACKENDSUBNET.19 80 with cookie: abcND0QYKjeOCczB8c_Ds

Mar 21 01:14:25 tmm tmm[1629]: Rule JSESSION_iRule_withlogging <HTTP_REQUEST>: CLIENTIP:63230: Request to /images/mfg/icons/icon_largebluestar.png on server -http-pool BACKENDSUBNET.19 80 with cookie: abcND0QYKjeOCczB8c_Ds

Mar 21 01:14:25 tmm tmm[1629]: Rule JSESSION_iRule_withlogging <HTTP_REQUEST>: CLIENTIP:63232: Request to /mfg/scripts/search/search.js on server -http-pool BACKENDSUBNET.19 80 with cookie: abcND0QYKjeOCczB8c_Ds

Mar 21 01:14:25 tmm tmm[1629]: Rule JSESSION_iRule_withlogging <HTTP_REQUEST>: CLIENTIP:63235: Request to /favicon.ico on server -http-pool 0 with cookie: abcND0QYKjeOCczB8c_Ds

Mar 21 01:14:25 tmm tmm[1629]: 01220002:6: Rule JSESSION_iRule_withlogging <LB_SELECTED>: From CLIENTIP to physical server -http-pool BACKENDSUBNET.19 80 the cookie JSESSIONID is abcND0QYKjeOCczB8c_Ds URI JESSIONID is

Mar 21 01:14:25 tmm tmm[1629]: Rule JSESSION_iRule_withlogging <HTTP_REQUEST>: CLIENTIP:63236: Request to /servlet/mfg.Controller?time=1269130074065&pmId=1001&act=1154 on server -http-pool 0 with cookie: abcND0QYKjeOCczB8c_Ds

Mar 21 01:14:25 tmm tmm[1629]: 01220002:6: Rule JSESSION_iRule_withlogging <LB_SELECTED>: From CLIENTIP to physical server -http-pool BACKENDSUBNET.19 80 the cookie JSESSIONID is abcND0QYKjeOCczB8c_Ds URI JESSIONID is

Mar 21 01:14:25 tmm tmm[1629]: Rule JSESSION_iRule_withlogging <HTTP_REQUEST>: CLIENTIP:63224: Request to /mfg/contactHome.jsp?time=1269130079475&pmId=1154 on server -http-pool BACKENDSUBNET.20 80 with cookie: abcND0QYKjeOCczB8c_Ds

Mar 21 01:15:31 tmm tmm[1629]: Rule JSESSION_iRule_withlogging <HTTP_REQUEST>: CLIENTIP:63247: Request to /servlet/mfg.Controller?time=1269130079686&pmId=1154&act=supplierDisplayAgent&aid=904564&dgrdv=1 on server -http-pool 0 with cookie: abcND0QYKjeOCczB8c_Ds

Mar 21 01:15:31 tmm tmm[1629]: 01220002:6: Rule JSESSION_iRule_withlogging <LB_SELECTED>: From CLIENTIP to physical server -http-pool BACKENDSUBNET.20 80 the cookie JSESSIONID is abcND0QYKjeOCczB8c_Ds URI JESSIONID is

Mar 21 01:15:31 tmm tmm[1629]: Rule JSESSION_iRule_withlogging <HTTP_REQUEST>: CLIENTIP:63247: Request to /mfg/contactHome.jsp?time=1269130145070&pmId=1016 on server -http-pool BACKENDSUBNET.20 80 with cookie: abcND0QYKjeOCczB8c_Ds

Mar 21 01:15:43 tmm tmm[1629]: Rule JSESSION_iRule_withlogging <HTTP_REQUEST>: CLIENTIP:63247: Request to /images/mfg/modalbox/close.gif on server -http-pool BACKENDSUBNET.20 80 with cookie: abcND0QYKjeOCczB8c_Ds

Mar 21 01:15:44 tmm tmm[1629]: Rule JSESSION_iRule_withlogging <HTTP_REQUEST>: CLIENTIP:63247: Request to /servlet/mfg.Controller?time=1269130145287&pmId=1016&act=modal&mtId=800&mLoad=true&aid=904564 on server -http-pool BACKENDSUBNET.20 80 with cookie: abcND0QYKjeOCczB8c_Ds

Mar 21 01:15:44 tmm tmm[1629]: Rule JSESSION_iRule_withlogging <HTTP_REQUEST>: CLIENTIP:63250: Request to /images/bo/design/spinner.gif on server -http-pool 0 with cookie: abcND0QYKjeOCczB8c_Ds

Mar 21 01:15:44 tmm tmm[1629]: 01220002:6: Rule JSESSION_iRule_withlogging <LB_SELECTED>: From CLIENTIP to physical server -http-pool BACKENDSUBNET.20 80 the cookie JSESSIONID is abcND0QYKjeOCczB8c_Ds URI JESSIONID is

Mar 21 01:15:44 tmm tmm[1629]: Rule JSESSION_iRule_withlogging <HTTP_REQUEST>: CLIENTIP:63247: Request to /images/mfg/icons/search_cross.png on server -http-pool BACKENDSUBNET.20 80 with cookie: abcND0QYKjeOCczB8c_Ds

Mar 21 01:15:44 tmm tmm[1629]: Rule JSESSION_iRule_withlogging <HTTP_REQUEST>: CLIENTIP:63250: Request to /images/mfg/icons/doubleDownArrow.png on server -http-pool BACKENDSUBNET.20 80 with cookie: abcND0QYKjeOCczB8c_Ds

Mar 21 01:15:45 tmm tmm[1629]: Rule JSESSION_iRule_withlogging <HTTP_REQUEST>: CLIENTIP:63247: Request to /images/mfg/combo/comboover.gif on server -http-pool BACKENDSUBNET.20 80 with cookie: abcND0QYKjeOCczB8c_Ds

Mar 21 01:15:45 tmm tmm[1629]: Rule JSESSION_iRule_withlogging <HTTP_REQUEST>: CLIENTIP:63250: Request to /images/mfg/combo/combopress.gif on server -http-pool BACKENDSUBNET.20 80 with cookie: abcND0QYKjeOCczB8c_Ds

Mar 21 01:15:46 tmm tmm[1629]: Rule JSESSION_iRule_withlogging <HTTP_REQUEST>: CLIENTIP:63247: Request to /servlet/mfg.ajaxProvider.GetDisciplineProvider;jsessionid=abcND0QYKjeOCczB8c_Ds?time=1269130158059&pmId=1016 on server -http-pool BACKENDSUBNET.20 80 with cookie: abcND0QYKjeOCczB8c_Ds

Mar 21 01:15:46 tmm tmm[1629]: Rule JSESSION_iRule_withlogging <HTTP_REQUEST>: CLIENTIP:63250: Request to /servlet/mfg.ajaxProvider.GetRfqBuyerLocationProvider?time=1269130158059&pmId=1016&sImg=false&sCwor=false on server -http-pool BACKENDSUBNET.20 80 with cookie: abcND0QYKjeOCczB8c_Ds

Mar 21 01:15:46 tmm tmm[1629]: Rule JSESSION_iRule_withlogging <HTTP_REQUEST>: CLIENTIP:63247: Request to /images/mfg/icons/dhtmlTree/iconUnCheckAll.gif on server -http-pool BACKENDSUBNET.20 80 with cookie: abcND0QYKjeOCczB8c_Ds

Mar 21 01:15:46 tmm tmm[1629]: Rule JSESSION_iRule_withlogging <HTTP_REQUEST>: CLIENTIP:63247: Request to /servlet/mfg.ajaxProvider.GetMaterialProvider?time=1269130158060&pmId=1016&sImg=false on server -http-pool BACKENDSUBNET.20 80 with cookie: abcND0QYKjeOCczB8c_Ds

Mar 21 01:15:46 tmm tmm[1629]: Rule JSESSION_iRule_withlogging <HTTP_REQUEST>: CLIENTIP:63250: Request to /images/mfg/combo/combonormal.gif on server -http-pool BACKENDSUBNET.20 80 with cookie: abcND0QYKjeOCczB8c_Ds

Mar 21 01:15:46 tmm tmm[1629]: Rule JSESSION_iRule_withlogging <HTTP_REQUEST>: CLIENTIP:63247: Request to /servlet/mfg.ajaxProvider.GetIndustryProvider?time=1269130158060&pmId=1016&ids= on server -http-pool BACKENDSUBNET.20 80 with cookie: abcND0QYKjeOCczB8c_Ds

Mar 21 01:15:46 tmm tmm[1629]: Rule JSESSION_iRule_withlogging <HTTP_REQUEST>: CLIENTIP:63250: Request to /servlet/mfg.ajaxProvider.GetLanguageProvider?time=1269130158060&pmId=1016&ids= on server -http-pool BACKENDSUBNET.20 80 with cookie: abcND0QYKjeOCczB8c_Ds

Mar 21 01:15:47 tmm tmm[1629]: Rule JSESSION_iRule_withlogging <HTTP_REQUEST>: CLIENTIP:63247: Request to /images/mfg/icons/dhtmlTree/folderOpen.gif on server -http-pool BACKENDSUBNET.20 80 with cookie: abcND0QYKjeOCczB8c_Ds

The support team cannot figure out why this happens, and it's been going on for a very long time. I will keep updating this post as the case goes on. The latest saga is that they are blaming it on the SNAT we are using.

http://devcentral.f5.com/Default.aspx?tabid=53&view=topic&postid=813179

http://devcentral.f5.com/Default.aspx?tabid=53&forumid=5&tpage=1&view=topic&postid=86374

The F5 can be implemented as the inline gateway or outside of the gateway as a NAT device. We opted for the latter in order to avoid sending unnecessary traffic through the device. We may have to do some testing with it as the gateway and see if that fixes things, but it clearly looks like a bug to me. We are running BIG-IP 9.4.5 Build 1049.10 Final, and we're planning on moving to v10 soon.

Wednesday, March 10, 2010

The Battle of the CMS

My company was paying a lot for an expensive CMS which wasn't working properly, so I had mentioned we should look at Joomla and Drupal as some of the popular systems out on the internet. Of course IT wasn't as involved as we should have been, and marketing is essentially forcing us to use Drupal. Then we started giving them some of our requirements, and the lack of certain integrated core functionality is pretty disappointing for a proper CMS. I still have yet to see a full requirements list, but I do have a list of half a dozen or so items on the operations side surrounding deployment, rollback, and environment management. I hope someone puts together a proper requirements list so we know where the technology will work well and where it will fail.


Moving to the Cloud and Packing up DR

In other news, on a side consulting gig I am doing, we moved the company from Exchange to Google Apps. It's been a bit painful, but it will be more efficient in terms of cost and support. With DR being very important to the firm, this is a perfect fit, especially with the Postini archiving solutions. For such a small firm it made a lot of sense. We are also re-architecting the overall infrastructure from a dual-location (DR) setup with clustering to a single location, and in the process we are moving from Windows Server 2008 to Windows Server 2008 R2. I haven't done a lot of Hyper-V, but I have done a lot of VM work, Windows, and iSCSI. This should prove to be an interesting project, both on the technology side and in moving to cloud-based resources, as well as for the future direction of the company.

Expect more soon!

Wednesday, February 10, 2010

JMS queues and Spring problems

We're looking at implementing JMS, which is something we sorely need in order to break our monolithic codebase into small, portable, non-interdependent (where it makes sense) modules. I leave the code and software architecture to the experts on the development team, but of course my team has to deal with supporting whatever is designed and implemented, as well as monitoring and managing the services and associated technologies.

Of the products out there, we are leaning towards the open source queues, due to cost and the fact that JMS has been around long enough to be a reliable and commonly used technology. One of the major ones we are looking at is ActiveMQ, but we're also looking at Sun's offering and other alternatives. Any suggestions would be awesome!

http://en.wikipedia.org/wiki/Java_Message_Service
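
One thing I like about ActiveMQ from the ops side: besides JMS it also speaks STOMP (on port 61613 when that connector is enabled), so you can poke a queue without a JVM. A minimal raw-socket sketch, with the broker hostname and queue name as placeholder assumptions:

import socket

HOST, PORT = "activemq.example.local", 61613  # hypothetical broker

def frame(command, headers, body=""):
    # A STOMP frame is: COMMAND, header lines, blank line, body, NUL byte
    head = "".join("%s:%s\n" % kv for kv in headers.items())
    return "%s\n%s\n%s\x00" % (command, head, body)

s = socket.create_connection((HOST, PORT))
s.sendall(frame("CONNECT", {}))
print s.recv(1024)  # expect a CONNECTED frame back
s.sendall(frame("SEND", {"destination": "/queue/ops.test"}, "hello from ops"))
s.sendall(frame("DISCONNECT", {}))
s.close()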

Sorry for the lack of posts recently; I've been busy with work and doing all that not-so-fun internal administrative stuff. I hope to be posting more regularly now that we have a lot of interesting new projects going. Had a great time in New Orleans last weekend. Nice job, Saints!

Open source config management and AV

I have a new engineer on my team this week, and a couple of his first projects are open source config management (for basic config files) and open source AV.

On the config management front, we're leaning towards Puppet. I have a good friend who uses cfengine on a big server farm, and he loves it. From what I've read, Puppet seems to be a newer, more modern take on the same idea, and since we don't have a huge farm to manage I think it will work perfectly for us. Looking forward to learning and implementing it! (A quick sketch of what a manifest looks like follows the link below.)

http://en.wikipedia.org/wiki/Comparison_of_open_source_configuration_management_software
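
For a flavor of why Puppet appeals to us, a minimal manifest sketch; the file path, module name, and service are illustrative assumptions, not anything we've deployed yet:

# Keep ntp.conf in sync everywhere and bounce ntpd when it changes
file { "/etc/ntp.conf":
  ensure => file,
  owner  => "root",
  mode   => "0644",
  source => "puppet:///modules/ntp/ntp.conf",  # served from the puppetmaster
  notify => Service["ntpd"],
}

service { "ntpd":
  ensure => running,
  enable => true,
}

The declarative model (describe the end state, let the agent converge) is exactly what we want for basic config files.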

With AV we are running some tests with ClamWin, and we'll see how well it can pick up incoming viruses uploaded via our Resin application servers. The rough shape of the test is below. We shall see.
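
Just to show the shape of it, a sketch of the kind of check we're trying: run clamscan over the directory the app servers drop uploads into and look at the exit code (0 = clean, 1 = virus found, 2 = error). The directory path is a placeholder assumption.

import subprocess

UPLOAD_DIR = "/var/resin/uploads"  # hypothetical upload drop directory

# -r recurses; --infected prints only the files that match a signature
rc = subprocess.call(["clamscan", "-r", "--infected", UPLOAD_DIR])
if rc == 1:
    print "Infected file(s) found in %s" % UPLOAD_DIR
elif rc == 0:
    print "Uploads are clean"
else:
    print "clamscan error (exit code %d)" % rc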

The next project will be bringing a CentOS yum repo in-house and patching over the LAN instead of the internet. It's another project I've wanted to get done but haven't yet completed; a sketch of the client side is below.
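
The usual recipe, as I understand it, is to mirror the upstream repo to an internal web server (reposync/createrepo) and then point clients at it with a .repo file. A sketch of the client-side config, with the internal hostname as a placeholder:

[base-local]
name=CentOS-$releasever - Base (local mirror)
baseurl=http://yum.internal.example.com/centos/$releasever/os/$basearch/
gpgcheck=1
enabled=1

Drop that into /etc/yum.repos.d/, disable the stock [base] entry, and updates pull over the LAN.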

Thursday, January 14, 2010

Cloud usage

Cloud, cloud, cloud: the cloud is HERE! I love articles like this: http://alan.blog-city.com/has_amazon_ec2_become_over_subscribed.htm# It's a really interesting read, and makes a lot of sense. The problem is that you don't have visibility into what they are running and how well it's running. Running my business on a completely unknown infrastructure would be a no-no. I understand the internet is a big shared infrastructure, but wouldn't you rather know the specifics about the performance and capacity of your own infrastructure?

This is an interesting thing to think about before you move apps all over the cloud and have to manage a mess. I do like some of the monitoring being done by products like Zenoss: http://mediasrc.zenoss.com/documents/Using+Zenoss+to+Manage+Cloud.pdf. I like Zenoss a lot, but haven't done a full deployment of it yet. I might have the opportunity to do that soon!

Wednesday, January 6, 2010

Storage upgrades - Netapp

We're doing some storage upgrades. Basically I need to build out a load testing environment, and since our application is pretty much database-bound, I need to invest a bunch of cash in more big iron and more spindles. I need quad-socket, 6-core DB servers, which is what we have in production right now. I also want to get 24 spindles of disk, which matches what we have in production for the database. The disks will mostly be used for the load testing environment, but they will also be used for some other projects.

It's too bad our netapp FAS 3040s have all the slots full (10g, PAM, then the rest FC cards). I need to buy FC disks and DS14 shelves versus buying the newer SAS disks which I would rather have, pretty annoying. Anyways I am buying 28 disks, and its costing me around $54k for the disks. So if you add up the disks that's 8.4TB raw (but the spindle count is what really matters), which means I am at about $6.30/G of data, which is pretty poor. That doesn't even include the support costs for the new hardware. Retail price of these disks is $280, lets tack on another 25% so they are $350 per disk, or around $1.10/G. The cost of the shelfs is costing me roughly 5x the price for the same disks. Can you see why I am pissed? I can understand 100% markup, but 500%... really? Who says these storage companies aren't like the lawyers J