
Posts

Showing posts from 2010

Dealing with Information Overload

I follow a lot of different sites. I tend to read my news every other day, and I'm always trying to find the best tools to deal with my data on multiple devices (iPhone, iPad) and multiple computers (Google Chrome OS, Windows). I have found the best combination to be the following:
1. Master feeds in Google Reader (works great on any device)
2. FeedDemon (Windows)
FeedDemon is a great app; it syncs with my Google feeds and gives me the same view of what I have and haven't read. It has a better user interface for getting through all that data, and I also customize the views to suit the way I want to fly through my news. These kinds of user interface tweaks aren't possible with Google Reader yet, but it will improve. Google Reader is a great app and works well in mobile or desktop browsers. I always use Google Chrome as my primary browser; it wins the speed race hands down and supports the critical extensions I need. I have moved off Firefox as my pri

Google Chrome Netbook

I came home today to find a strange-looking box with odd shapes on the outside. Once I opened it, I didn't expect to find a small netbook inside, shipped from Google. This is the Chrome OS netbook they shipped to me for free. It's a nice form factor, not too large and not too small. Once the OS boots up, you just hook it up to wifi, log in with your Google account, snap a profile picture, and you are off. After some time it upgraded itself, similar to Chrome, and required a reboot after the update. It runs very well and does a good job with Flash and other web media. It took quite a while to get my full extension sync from the PC, but most of my extensions worked without a hitch. The major one that doesn't work is LastPass, which I really need! I tried many sites on it, and everything looks like it works well. I also installed several "apps" from the store which make for easy access to my Google products and other sites I use a lot. It will make a good iPad-type to

Great freeware system admin tools

Thanks to the awesome community and software over at Spiceworks - www.spiceworks.com - I found these great free tools from Netwrix - www.netwrix.com - that are superb for any system admin dealing with Windows systems. I could have used this file server monitor freeware in the past for basic audits; I wish I had known about it sooner. The other really useful tools we are using are the following freeware tools. Essentially the free versions just email you at 3am every night with changes or reports; you can change the schedule using the Scheduled Tasks control panel:
Password Expiration Notifier - http://www.netwrix.com/password_expiration_notifier_freeware.html
This tool emails end users when their passwords are expiring. It's good for us because we have some remote users and Mac users who do not get notified, and it should prevent the lockouts we see when passwords expire.
AD Change Reporter - http://www.netwrix.com/active_directory_change_reporting_freeware.html
This p
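The core check behind a notifier like this is just date arithmetic. A minimal Python sketch of the idea, with made-up values: the 90-day maximum age, the 7-day warning window, and the dates are all assumptions of mine; the real tool reads pwdLastSet and the domain's password policy out of Active Directory.

```python
from datetime import datetime, timedelta

def days_until_expiry(pwd_last_set, max_age_days=90, now=None):
    """Days left before a password expires under a simple max-age policy."""
    now = now or datetime.now()
    expires = pwd_last_set + timedelta(days=max_age_days)
    return (expires - now).days

def should_warn(pwd_last_set, warn_days=7, now=None):
    """Email the user once they are inside the warning window."""
    return days_until_expiry(pwd_last_set, now=now) <= warn_days

# Example: password set Jan 1, checked Mar 1 -> expires Apr 1, 31 days out
print(days_until_expiry(datetime(2010, 1, 1), now=datetime(2010, 3, 1)))  # 31
```

The scheduled task then just runs this per user every night and sends mail for anyone where the warning check comes back true.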

Using AWS for larger business

Netflix is one of those really secretive companies; there have been some interesting articles on how they run operations for disc delivery, but not much on the way they deliver digital content. I came across this really cool article on how they use AWS: http://www.readwriteweb.com/cloud/2010/11/why-netflix-switched-its-api-a.php Pretty interesting read. I'm not sure I agree with some of the statements about needing fewer system admins and fewer database folks when using AWS. I can understand fewer datacenter staff, but managing virtual or cloud infrastructure is just as much work. Obviously this is only the case when it's running customized software and databases built internally (as at Netflix). You still need to release software, manage the databases, and handle the same problems you would if you were doing it all in house. The only items you don't need to worry about would be the following:
Backups
Provisioning new hardware (which is pretty simple if you run your own vm

PCI compliance and SSLv2

So I am doing a PCI audit, and one of the requirements is that there must not be weak cipher support enabled on systems which collect credit cards from the web. I started doing some testing against some of the larger ecommerce sites out there, and it turned up some pretty startling findings. SSLv3 has been in browsers since 1996 (think Mozilla 2.0... way before we had Firefox). http://blog.zenone.org/2009/03/pci-compliance-disable-sslv2-and-weak.html From my testing, these sites do have SSLv2 disabled: Google, PayPal, Delta, E*Trade. These sites don't have SSLv2 disabled, which is strictly against PCI: Home Depot, Bank of America, Scottrade, Microsoft, Amazon, QVC, Dell, Orbitz. It's really concerning that these big commerce sites allow something like that to slip by the auditors. Time to hire me to fix your compliance :)
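The test itself boils down to offering the server an SSLv2-only hello and seeing whether it answers with an SSLv2 ServerHello instead of dropping the connection (`openssl s_client -ssl2 -connect host:443` does the same job from the command line). A rough Python sketch of the probe, for illustration rather than as an audit tool; the record layout and cipher codes follow the SSLv2 draft spec:

```python
import socket
import struct

def sslv2_client_hello():
    """Build a minimal SSLv2 ClientHello record: 2-byte header with the
    high bit set, then msg-type 1, version 0x0002, three length fields,
    the cipher specs (3 bytes each), and a 16-byte challenge."""
    ciphers = bytes.fromhex(
        "010080"   # SSL2_RC4_128_WITH_MD5
        "020080"   # SSL2_RC4_128_EXPORT40_WITH_MD5
        "030080"   # SSL2_RC2_128_CBC_WITH_MD5
        "040080"   # SSL2_RC2_128_CBC_EXPORT40_WITH_MD5
        "060040"   # SSL2_DES_64_CBC_WITH_MD5
        "0700c0"   # SSL2_DES_192_EDE3_CBC_WITH_MD5
    )
    challenge = b"\x00" * 16
    body = (
        b"\x01"                            # handshake type: ClientHello
        + b"\x00\x02"                      # version: SSL 2.0
        + struct.pack(">H", len(ciphers))  # cipher-spec length
        + struct.pack(">H", 0)             # session-id length
        + struct.pack(">H", len(challenge))
        + ciphers
        + challenge
    )
    header = struct.pack(">H", 0x8000 | len(body))
    return header + body

def supports_sslv2(host, port=443, timeout=5):
    """True if the server answers our SSLv2-only hello with an SSLv2
    ServerHello (msg-type 0x04) rather than closing the connection."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(sslv2_client_hello())
        try:
            reply = s.recv(8)
        except socket.error:
            return False
    return len(reply) >= 3 and bool(reply[0] & 0x80) and reply[2] == 0x04
```

A compliant site should fail this probe; anything that completes the handshake is what the auditors ought to be flagging.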

Akamai user conference

Had a good time in Miami for a few days this week, and got a lot of good content from the conference. I'm going to go over my notes some more next week, but here are some of the highlights from the show:
New offering, fraud detection and scoring - Akamai does tokenization, removing PCI scope, and they can build a profile on end users and give them scores. Cybersource is one of the processors for the Akamai PCI solution.
Edge encryption - encrypts the data at the edge all the way to the database; only privileged systems can decrypt the data from the database. (might be useful, not sure)
SiteShield - ACL allowing requests only from specific Akamai servers (protects against DDoS)
ADS, predictive analytics - shows the proper ads based on what users are looking at across all sites; looks across 500+ shopping sites and 160M users; doesn't use pixels to allocate ads (slows down the site)
Akamai Velocitude - mobile reformatting on the fly - http://www.akamai.co

HP Upgrades - QC, QTP, PC

Before I go into these three suites of tools: when is HP finally going to update BAC, RUM, or Diagnostics? These tools have seemed really stagnant for the last three years. I'm not going to HP Software Universe anymore, and I keep getting new account reps, so I have no idea what the roadmap is these days. That's enough HP bashing; on to the good stuff. We did the yearly HP upgrade over the last couple of weeks. Here is the rundown on the technologies and what was involved in each:
1. QTP - Very easy upgrade; did this on our terminal server and a desktop. The license server is now supported on 64-bit machines, so we moved it off an older 2003 box onto a 2008 R2 system. No issues with the upgrade, and there seem to be a lot of improvements. Still waiting for feedback from our QA team on them.
2. PC - Ended up building a couple of new VMs for this, as we moved it onto Windows 2008 R2 (64-bit) as well. There were no issues with the reinstall or moving our

Password management

Personally I am a big fan of proper password management procedures. For my personal data I always used the open source tool KeePass ( http://keepass.info/ ) for my passwords, but it was always missing two things:
Better browser integration (I know you can use the form-filling plugins, but they aren't very well done or supported)
Distribution (I know you can use Dropbox or something else and it works fine)
For my encrypted data I always built hidden share volumes using another open source tool, TrueCrypt ( http://www.truecrypt.org/ ). The product works great, but I find that I need to encrypt less and less of my data these days. I just replaced them all with LastPass ( http://lastpass.com/ ), which is a very impressive product. It integrates with pretty much every major browser out there, it's all centralized, and it allows web access to all of your data. It allows import from pretty much any browser database or product (such as KeePass). It's $12 per year for premiu

OpenSolaris

At my previous company we were heavy users of Solaris, and we also had a lot of legacy SCO systems as well. Three or four years ago, some person (who shall remain nameless) was pushing OpenSolaris as "the future"; personally I thought the guy was way off base. He did deploy some of it, and it worked well, but the problem was always the support and the future for another player in the x86/64 market. There was no future; I saw it, but apparently other people in management didn't. I then read this article: http://www.infoworld.com/d/open-source/requiem-os-opensolaris-board-closes-shop-961?source=rss_infoworld_news I love being right :) Sorry to see you go, but it's for the better. Hopefully Oracle will invest more resources into Linux, which it hasn't been doing as much since the Sun purchase.

Thoughts on the McAfee Intel purchase

I've been waiting quite a while for a major security firm to be purchased by one of the big boys. I am glad that Intel was the first one to start this trend, because they are generally only a hardware player. If security were embedded at that level it would create a differentiator from other competitors, whether they are x64 based or other chips (Oracle, IBM). Security has become very commoditized and consolidated over the last several years. You haven't seen much innovation in several years either. Is that because we've solved the problem? I think not… Is that because there isn't capital in this market? Nope… I think the main reason is the massive consolidation and the work needed to integrate all of these smaller companies together. You are also seeing players like Microsoft developing a larger security portfolio, as well as network vendors integrating more security features and products into their appliances. If you look back 5 years ago, there

ESXi 4.0 – 4.1 Planning

Writing this thanks to Gogo in-flight wireless… love this service. I'm going to start upgrading our hosts to 4.1 next weekend, probably. I'm going to try Update Manager even though it crashed and burned on my 3.5-4.0 upgrade, where I ended up using the host utility. I know the command line upgrader works. I read this on the Spiceworks message board:
Resolution: This is worth knowing as it's a bug and a definite gotcha. VMware said: "I had one of the escalation engineers for Update Manager look at the log and here is what he said. The customer imported the pre-upgrade offline bundle, which is NOT needed and in fact causes problems.
[2010-07-28 13:36:55:781 'DownloadOfflinePatchTask.DownloadOfflinePatchTask{9}' 3700 INFO] [vciTaskBase, 530] Task started...
[2010-07-28 13:36:55:781 'DownloadOfflinePatchTask.DownloadOfflinePatchTask{9}' 3700 INFO] [downloadOfflinePatchTask, 123] Upload offline bundle: C:\Windows\TEMP\vum8205261630712700578.pre-upgrade-from-ESX4.0-to-4.1.0-0.0.260

VMware 4.1

Let me start off by saying that this site is great; it's always entertaining and filled with great data: http://get-admin.com/blog/ I'm very happy that VMware released 4.1 recently. After the horror stories I read about 4.0U1 we decided to skip it and wait for 4.1. There wasn't a lot in the updates we cared about anyway. There are quite a few interesting features in 4.1, and one of the good things VMware has done is finally kill off ESX (after this release). ESXi has been great for us over the last couple of years, and I haven't had any complaints since switching over to it from ESX. I found Update Manager was not the most reliable when moving from 3.x to 4.x on the physical systems themselves, so we opted to use the host upgrade tool. That is not a supported method to move from 4.x to 4.1, so we will probably have to give Update Manager another run, which concerns me. At least it's not as complex as upgrading Hyper-V :) I wi

Thoughts on IBM BigFix Purchase

BigFix makes some excellent products, and they have been moving in great directions over the last couple of years. They have moved out of pure remediation and into configuration management and control. I would have loved to purchase them for use at my current company, but the pricing was a bit higher than what I'm paying for Shavlik HFNetChk Protect, which is another good product but far more limited. I wanted one tool to patch both Linux and Windows systems. IBM has been struggling to provide a good provisioning and patch management tool for years and years. First they were pushing TPM, which is probably the worst product I have seen IBM release. Unfortunately a company I worked for previously was obsessed with using this product, which most Tivoli enterprise customers get for free and completely disregard. I spent a good amount of time looking at the product and its capabilities, or lack thereof. I'm concluding my rant now, but I'm happy to see IBM adding a su

Simplify and Automate

Now that the workload has reduced a bit over the last month or so, we can spend time doing project work. It's always been my philosophy to simplify as much as possible; this is normally because I end up having to fix messes, which are usually caused by undue complexity. Complexity can affect performance, availability, and manageability. Automation can often create complexity, as can requests by various people in the business who don't necessarily plan the projects or requests they make of others (especially of development and operations). That being said, I often get blocked when I try to simplify things, because people want to build things out in a more redundant manner than the business needs require. There are a lot of ways to create a redundant system without creating complexity; you just have to step back and look at the overall configuration and requirements to come up with the best solution. We get a lot of requests from our QA team to reload various Resin app s

JMS, Endeca

We are building a services tier based on ActiveMQ JMS and our standard Resin web/app servers. We are building this with two nodes, with the services shared across both, using a shared file system. We have a few services to start with, which are internal only at first. Should be interesting. We are also working on some new products, and we are pretty close to selecting Endeca as the indexing and SEO engine for them. More on that as we get along with development and implementation. The product looks pretty cool, so it should be fun.

Upgrade Land for Microsoft - Sharepoint / Exchange 2010, and JIRA 4.1

I have been using Office 2010 for a while, and moved from the preview to the beta, and now I am finally on the release. We decided corporately to stick to 32-bit even where we are on 64-bit Windows 7 on our newer systems. The main reason for staying on 32-bit was that all of the add-ons on the market are written for 32-bit only; when I was testing on 64-bit, I wished I had just stuck with 32. The released version has been stable for the last few days, but I didn't have many issues with the beta release either. Now that we have Office underway, we are beginning upgrades to the other 2010 products we use from Microsoft. The first one is SharePoint; we are on MOSS 2007 right now. The migration was slightly painful, and here are some of the pointers I found helpful:
Run PowerShell as admin http://manish-sharepoint.blogspot.com/2009/11/error-while-working-with-powershell.html
Batch upgrade visual styles (the new look and feel for SharePoint) http://

Tridion Upgrade 2009 SP1

We have decided, after some pain, to give Tridion another go over here. We have some really sharp guys helping us from the firm, and they have helped us immensely. We just upgraded to the newest version, and after the initial struggle to get it running, it has gone very smoothly and simply. Within a couple of hours we moved everything over to the new version, and it's working flawlessly. It was very simple, and good to see the quality of the installers; they handled pretty much everything without any additional manual steps. We are looking forward to moving to the next version later this summer as we beta test for them. Lots of the issues with the product were due to the implementation that was designed for us. We will be redoing our site and building it properly using the new version. I think with proper guidance and a good technical team we will not have the issues of the past. We are also moving a lot of our custom code from the current codebase into a web services layer t

Week in Geneva

Just wrapping up a week of pretty intense work here in our datacenter. Here is a list of some of the fun projects we accomplished:
Disk upgrades to the NetApp - NetApp locally here in Switzerland went out of their way to fix issues caused by my purchase in the US (last time I buy in the US and ship overseas). They also looked over the system and made some very good corrections and suggestions; the great customer support was much appreciated.
Network reconfiguration - moved 10GbE to other subnets, changed the NetApp network config, and built out several additional cables and infrastructure
Firewall upgrades
F5 upgrades from OS v9.4.3 to OS v10.1
Installed 3 new VM servers
Installed memory in systems (DB, VM)
Cleaned up the office and built other infrastructure
Major failover testing of the NetApp, firewalls, and load balancers
Now we are trying to get home around the volcanic ash situation in Europe. It looks like we will be driving our rental car to Barcelona and taking a flight from there. Should be an interesting lit

Finally a way to block those pesky bots stealing content

We've been using a product over at MFG which is sort of like an invisible captcha tool. The beauty of the product is that the end user doesn't even know it's running, but the accuracy and the technology used are very unique and cutting edge. We first started speaking with Pramana - www.pramana.com - over a year ago; initially there were issues with the technology, but it has progressed quickly and become rock solid. I was unable to get false positives in all my testing and scripting. We implemented the technology (Pramana HumanPresent - www.pramana.com/human-present/ ) because of issues with competitors which sell databases and information about manufacturing companies, essentially stealing our content. They use various methods, including screen-scraping and SEO-scraping bots. This has been observed on many occasions, and we even had one company who wanted to sell out to us while they were stealing our data! (somewhat legally) The product is not super simple to implement

F5 Persistence and my 6 week battle with support

We've been having issues with persistence on our F5s since we launched our new product. We have tried many different ways of getting our clients to stick to a server. Of course the first step was standard cookie persistence, with the F5 injecting the cookie. All of our products which use SSL are terminated on the F5, which makes cookie persistence work fine even for SSL traffic. After we started seeing clients going to many servers, we figured it would be safe to use the JSESSIONID cookie, a standard Java application server cookie that is always unique per session. We implemented the following iRule (slightly modified in order to get more logging): http://devcentral.f5.com/Default.aspx?tabid=53&view=topic&postid=1171255 (registration is free)
when HTTP_REQUEST {
    # Check if there is a JSESSIONID cookie
    if { [HTTP::cookie "JSESSIONID"] ne "" } {
        # Persist off of the cookie value with a timeout of 2 hours (7200 seconds)
        persist uie [HTTP::cookie "JSESSIONID"] 7200
    }
}
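The effect the rule is after is easy to picture with a toy model: the first request seen with a given JSESSIONID pins that session to a server, and every later request carrying the same cookie lands on the same one. A small Python sketch of that idea; the server names and the round-robin choice for new sessions are my assumptions, and the timeout handling on the real device is omitted.

```python
import itertools

# Hypothetical pool; the real persistence table lives on the load balancer
# and entries expire (7200 s in the iRule above).
servers = ["app1", "app2", "app3"]
rr = itertools.cycle(servers)
persistence = {}  # JSESSIONID -> pinned server

def route(jsessionid):
    """Pin a new session to the next server; reuse the pin afterwards."""
    if jsessionid not in persistence:
        persistence[jsessionid] = next(rr)
    return persistence[jsessionid]
```

With this in place, repeated calls with the same cookie value always return the same server, which is exactly the stickiness the application session state depends on.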

The Battle of the CMS

My company was paying a lot for an expensive CMS which wasn't working properly, so I had mentioned we should look at Joomla and Drupal, as some of the popular systems out on the internet. Of course IT wasn't as involved as we should have been, and marketing is essentially forcing us to use Drupal. Then we started giving some of our requirements, and the lack of integrated core functionality is pretty disappointing for a proper CMS. I still have yet to see a full requirements list, but I do have a list of half a dozen or so on the operations side surrounding deployment, rollback, and environment management. I hope someone puts together a proper requirements list so we know where the technology will work well and where it will fail. I

Moving to the Cloud and Packing up DR

In other news, on a side consulting gig I am doing, we moved the company from Exchange to Google Apps. It's been a bit painful, but it will be more efficient in terms of cost and support. With DR being very important to the firm, this is a perfect fit, especially with the Postini archiving solutions; with such a small firm it made a lot of sense. We are also re-architecting the overall infrastructure from a dual-location (DR) setup with clustering to a single location. In the process we are moving from Windows Server 2008 to Windows Server 2008 R2. I haven't done a lot of Hyper-V, but I have done a lot of VM work, Windows, and iSCSI. This should prove to be an interesting project, both on the technology side and in moving to cloud-based resources, as well as for the future direction of the company. Expect more soon!

JMS queues and Spring problems

We're looking at implementing JMS, which is something we sorely need in order to break our monolithic codebase into small, portable, non-interdependent (where it makes sense) modules. I leave the code and software architecture to the experts on the development team, but of course my team has to deal with supporting whatever is designed and implemented, as well as monitoring and managing the services and associated technologies. Of the products out there, we are leaning towards the open source queues, due to cost and the fact that JMS has been around long enough to be a reliable and commonly used technology. One of the major ones we are looking at is ActiveMQ, but we're also looking at Sun's offering and other alternatives. Any suggestions would be awesome! http://en.wikipedia.org/wiki/Java_Message_Service Sorry for the lack of posts recently; I've been busy with work and doing all that not-so-fun internal administrative stuff. I hope to be posting more regularly now that we have a lot
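The decoupling argument is the whole point of a queue: a producer hands off work and moves on while a consumer drains messages at its own pace. A toy in-process version using Python's standard library to illustrate the shape of it; a real deployment would of course point at a broker such as ActiveMQ rather than a local Queue, and the message fields here are made up.

```python
import queue
import threading

# Stand-in for a JMS queue: producers enqueue and keep working,
# consumers drain independently.
jobs = queue.Queue()
results = []

def producer(n):
    for i in range(n):
        jobs.put({"id": i, "payload": f"order-{i}"})
    jobs.put(None)  # sentinel: no more work

def consumer():
    while True:
        msg = jobs.get()
        if msg is None:
            break
        results.append(msg["payload"])

t_prod = threading.Thread(target=producer, args=(3,))
t_cons = threading.Thread(target=consumer)
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()
print(results)  # ['order-0', 'order-1', 'order-2']
```

The operational upside for my team is the same in either case: the modules only share a queue contract, so each side can be monitored, restarted, and scaled on its own.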

Open source config management and AV

I have a new engineer on my team this week, and a couple of the first projects he's working on are open source configuration management (for basic config files) and open source AV. On the config management front, we're leaning towards Puppet. I have a good friend who uses cfengine on a big server farm, and he loves it. From what I've read, Puppet seems to be a newer, more modern take, and we don't have a huge farm to manage, so I think it will work perfectly for us. Looking forward to learning and implementing it! http://en.wikipedia.org/wiki/Comparison_of_open_source_configuration_management_software With AV we are running some tests with ClamWin, and we'll see how well it can pick up incoming viruses uploaded via our Resin application servers. We shall see. The next project will be bringing a CentOS yum repo in-house and patching over the LAN versus the internet - another project I've wanted to get done but haven't yet completed.
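Wiring ClamAV-style scanning into an upload path mostly reduces to checking the scanner's exit code (`clamscan` documents 0 as clean, 1 as a signature match, and anything else as an error). A minimal Python sketch, assuming the CLI is on the PATH; the function names are mine.

```python
import subprocess

def verdict(returncode):
    """Map clamscan's documented exit codes to a verdict string."""
    return {0: "clean", 1: "infected"}.get(returncode, "error")

def scan_upload(path):
    """Scan one uploaded file; anything not 'clean' gets rejected upstream."""
    proc = subprocess.run(
        ["clamscan", "--no-summary", path],
        capture_output=True, text=True,
    )
    return verdict(proc.returncode)
```

A hook like this could run from the app server after each upload lands on disk, quarantining or deleting anything that doesn't come back clean.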

Cloud usage

Cloud, cloud, cloud - the cloud is HERE! I love articles like this: http://alan.blog-city.com/has_amazon_ec2_become_over_subscribed.htm# Really interesting read, and it makes a lot of sense. The problem is that you don't have visibility into what they are running and how well it's running. Running my business on a completely unknown infrastructure would be a no-no. I understand the internet is a big shared infrastructure, but wouldn't you rather know the specifics about the performance and capacity of your infrastructure? This is something to think about before you move apps all over the cloud and have to manage a mess. I do like some of the monitoring being done by products like Zenoss: http://mediasrc.zenoss.com/documents/Using+Zenoss+to+Manage+Cloud.pdf I like Zenoss a lot, but haven't done a full deployment of it yet. I might have the opportunity to do that soon!

Storage upgrades - Netapp

We're doing some storage upgrades. Basically I need to build out a load testing environment, and since our application is pretty much database-bound I need to invest a bunch of cash in more big iron and more spindles. I need quad 6-core DB servers, which is what we have in production right now. I also want 24 spindles of disk, which matches what we have in production for the database. The disks will mostly be used for the load testing environment, but they will also be used for some other projects. It's too bad our NetApp FAS3040s have all their slots full (10GbE, PAM, then the rest FC cards); I need to buy FC disks and DS14 shelves instead of the newer SAS disks I would rather have, which is pretty annoying. Anyway, I am buying 28 disks, and it's costing me around $54k. Adding up the disks, that's 8.4TB raw (but the spindle count is what really matters), which puts me at about $6.30/GB, which is pretty poor. That doesn'
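The cost-per-GB figure is simple to sanity-check. A quick calculation, assuming 300 GB FC spindles (which is what makes 28 disks come out to 8.4 TB raw); the rounded "$54k" is why this lands a touch above the quoted $6.30/GB:

```python
disks = 28
gb_per_disk = 300        # assumed 300 GB FC spindles: 28 x 300 GB = 8.4 TB
total_cost = 54_000      # USD, "around $54k"

raw_gb = disks * gb_per_disk
print(raw_gb)                          # 8400 GB = 8.4 TB raw
print(round(total_cost / raw_gb, 2))   # 6.43 USD per raw GB
```

Either way the order of magnitude stands: mid-single-digit dollars per raw GB for FC spindles, before any RAID or spare overhead eats into the usable number.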