warthog9: Warthog9 (Default)
2014-12-17 09:03 pm

Sony Xperia Z2 Tablet - or wow I'm horrible at updating this blog

I've taken to more of my medium length content being over on G+ these days, but I wanted to detail some things about my recently purchased Sony Xperia Z2 Tablet (GOOD LORD that's a mouthful!). I'll keep the gushing over the hardware to a minimum (it's a lot nicer than my old Xoom, that's for sure). So here goes!


Yup, it's amazing; it's hard to argue with it much. It's thin, it's powerful, and the screen looks nice. I do have some annoyances though, and they're just kind of stupid mistakes:
  • Charging off of USB is *AMAZING*, seriously this fills me with such happiness it's crazy. What would have been BETTER is to have had an option to either wirelessly charge (Qi) or to make a USB-to-magnetic-connector recharging thingie. These have been created by 3rd parties (I just ordered one of the USB-to-magnetic adapters, we'll see what I think) but it's just kind of odd that it wasn't dealt with up front. The reason I'm grumbling at all is that the little door you have to open to get at the USB port (for charging) is a little obnoxious to open, and I could see it being really obnoxious if you didn't have any nails. Overall it's a minor nit-pick, but yeah.
  • The headphone jack on the *BOTTOM* of the tablet? Seriously? The obvious problem there is: what do I do when I'm at 30K feet on a flight? I suppose I can flip the tablet over entirely in its case (I have the official Sony one, which is reasonably well thought out, with minor complaints there too), but that's workable, not ideal. I've got a flight coming up, I'll "test" it and get back to this on G+; I'll likely be annoyed.
That's about it really; the hardware more or less speaks for itself beyond that.


This is one area that many Android vendors futz up, thinking that they are "adding value"; what they are really adding is fragmentation and useless junk that usually undermines their own product. Sony, surprisingly, got the message somewhere and left nearly all of the OS stock, with minor tweaks here and there. Well done, Sony!
  • The launcher / app drawer that Sony provides is actually quite nice despite not being the stock one. It's got some nice features that I would honestly like to see in the stock Android launcher:
    • Ability to search the apps
    • Re-order the apps (even a custom ordering)
    • Folders in the app drawer

Now this isn't to say that the app drawer is perfect...

  • What's wrong w/ the app drawer:
    • It wastes a fair amount of space on the page by not packing the apps as densely as it could. Another column or row would go a long way to alleviating that
    • I like how on the stock launcher, if you long-press an app, you get the option to put it on the home screen, uninstall it, or see more information. It's a little thing, but I'd love to see that added to the Sony launcher (it has similar functionality, it's just not quite as intuitive)
Actually, surprisingly, I think that's about the only thing I can complain about there. I have the option to use the Google Now Launcher, but I'm going to stick with Sony's for now.

The other apps that deviate from Stock:
  • The Album / Gallery software is custom to Sony. It works, and I have no major complaints about it, though I wish I could sort things a little better; it seems to default to a very long sequential list by date, which is not the way my brain thinks about a lot of my photos. It does have the ability to connect to a network device and browse data there. I'm guessing it's DLNA only, but it's something!
  • Walkman / Play Now / etc - Let's be honest here, it's a music playing app; there's only so much you can do to differentiate. It looks like it supports all the formats I care about (MP3 & FLAC), and the interface seems OK. It lacks genre tag support, so I'm not overly thrilled with it, but overall it looks fine. It's easily replaced by other things if you don't like it
  • Movies - again a custom setup. It's got a couple of nice bonuses: a couple of movies that I had on my Xoom for testing that never played now play on the Sony, so yay for expanded codec support. It looks like it lacks full MPEG support though; not a show stopper obviously, but something to note. It does have the ability to look up data from Gracenote and add extra metadata - nice, but again not critical.
  • The only other thing I've run into that's obnoxious is that they "customized" the right-side pull down. I expect that will go stock in Lollipop, but their quick settings thing is just OK: it lacks the dynamic nature of stock, and things like Chromecast's "cast entire screen" will never show up in it. I'd argue those are fairly major demerits.
The rest of the OS looks and feels stock with a few minor additions or changes, so kudos to Sony for that one.


This is where the "fun" began for me.  Now many people will question why I care about root on my devices.  The biggest reason is that I run nightly backups of the devices.  It's silly, particularly for devices I'd consider generally ephemeral, but for some reason I don't see it as optional.  On my tablet I've also found root-enabled screen dimming very valuable when reading at night in bed, when the lowest brightness setting is too bright for a fully darkened room and the tablet is less than a foot from my nose.

Some caveats

So I clearly didn't read all the documentation before I plowed into rooting my device, and it turns out Sony has something "special" about how they do things, namely there's a magical place called the TA partition, specifically (seemingly) the "Trim Area".  On most sane devices (Read: Google Nexus devices), you unlock the boot loader, flash a new recovery in, load up a flashable su app and you are off to the races.  It's easy, it's painless, you don't have to worry too much.

Sony on the other hand hides a bunch of things, like DRM keys, in the TA and specifically *WIPES* that area out when you unlock the boot loader.  These are things I think should be stored in a TPM, or some other chunk of hardware where even if I root around I can't get at the base keys, but most ARM chips don't have such a thing, so that's where Sony put them.  The other thing this seems to govern is the warranty.  When you unlock the boot loader Sony uses words like "may void your warranty", or says that warranty service may incur an additional charge if you unlock your bootloader.  In reality it looks like by unlocking the bootloader you DO void your warranty, which I'm actually VERY disappointed in Sony over.  It's their choice on that, but using words like "may" implies to me that if I screw it up badly enough from a software perspective I'm toast - I'm OK with that - but if the screen has a normal warranty-able issue (it separates or something, I don't know), I'd expect that to be covered normally.  It also looks like, once you've unlocked your device, you can't re-lock it.

Now I've said all of that mainly because I plowed forward and was stupid: I didn't realize I needed to (and could) back up the TA, and it's all blown away.  The trick people are finding is that if you back it up, you can restore it and make it look like your device was never unlocked at the boot loader.  Ahhh, but you ask, how can you back up the TA without root, before you've unlocked the boot loader?  You got it: the 4.4.2 firmware is exploitable to gain root without unlocking the boot loader.  Do this, back up your TA, and you are golden from then on out.
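If you still have root on 4.4.2, the backup itself boils down to dd'ing the TA partition out and pulling it off the device. Here's a minimal sketch of that flow; note the partition path is an assumption based on the common Xperia Z-series layout, so verify it against your own device before trusting it, and the helper only builds the commands, so nothing touches a device until you actually run them:

```python
import subprocess

# ASSUMED TA partition path (typical for Xperia Z-series devices) --
# verify with something like `ls -l /dev/block/platform/*/by-name/` first!
TA_PARTITION = "/dev/block/platform/msm_sdcc.1/by-name/TA"

def ta_backup_cmds(dest="/sdcard/TA-backup.img", partition=TA_PARTITION):
    """Build the two commands that dd the TA partition to the sdcard and
    then pull the image to the host. Requires a rooted 4.4.2 device."""
    return [
        ["adb", "shell", "su", "-c", "dd if={} of={}".format(partition, dest)],
        ["adb", "pull", dest, "TA-backup.img"],
    ]

def run_backup():
    # Only here do we actually talk to the device.
    for cmd in ta_backup_cmds():
        subprocess.run(cmd, check=True)
```

Stash the resulting TA-backup.img somewhere very safe; it's the one thing that can make the device look factory-locked again.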

Ehhh, c'est la vie. I've either gambled well and the device will be awesome and last me 3 years, or it will destroy itself, angering me, and since I potentially have no warranty, I'll think twice about a Sony product in the future.


Grief, recovery is borked on the Sonys; not just a little, I mean A LOT.  Someone clever decided to deviate from the way many other devices work: instead of having a separate partition for the recovery (like Nexus devices), they stuck it in with the boot partition and made it work from there.  This means it's a pain to work with and that there's only really one recovery method: get XZDualRecovery to work.  This isn't *ENTIRELY* true, but read the 4.4.4 section for why this is borked and I hate it.


To keep XZDualRecovery (and make all that magic work) you need to keep their version of Busybox, so if you are like me and have Busybox Pro or something like that, DON'T install it after installing XZDualRecovery.  XZDualRecovery seemingly needs lzma support to make all its magic work, and that's not a common thing to have in other Busybox builds.


This was basically a no-brainer: run the various scripts floating around, they root the device, and you're golden.  Nothing but net!  Mind the caveat above on Busybox, though.

Well, you are golden until you decide "Hey, 4.4.4 is out for the Z2 Tablet SGP512, I should upgrade - heck, even the tablet is telling me I should upgrade!"  WELLLLLLLL, it seems Sony didn't like me rooting my tablet: the OTA servers stopped telling me there was an update, and the PCC (Sony's desktop app) refused to upgrade the device, claiming it didn't like my modified software.

Can't say I was overly impressed there.  Eventually I just downloaded the right files, found a program called Flashtool and got it upgraded.  I expect I'll have to do roughly the same thing for Lollipop when it comes out.


So when I upgraded to 4.4.4 I lost root, and started a quest to get it back.  The short answer is it's NEARLY impossible, and almost assuredly impossible with a non-unlocked bootloader.  To start with, the recovery stuff you had set up in 4.4.2?  Yup - wiped out.  So step 1 was getting a recovery going.  The only recoveries I could get to work were ones that were incompatible with the rest of the OS.  Bugger.  So my process eventually came down to:
  1. Flash incompatible cwm-z2-tablet.img (search for it on Google)
  2. Boot to the new recovery (you can't boot anything else, it'll just end up in a boot loop)
  3. Use a program called PRG (Pre-Root Generator), which takes the base files you would use with Flashtool and bolts in XZDualRecovery and a flashable superuser of your choice. Pay close attention to include the kernel, as that will overwrite the recovery image that doesn't work.
  4. adb push the file generated in step 3 onto the sdcard of the tablet
  5. install the zip on the tablet
  6. Reboot
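The command-line parts of the steps above can be sketched as a small driver script. This assumes an unlocked bootloader reachable via fastboot (how the incompatible recovery image gets flashed on these devices), and the file names are placeholders for whatever you downloaded or PRG actually produced:

```python
import subprocess

# Placeholder file names -- substitute whatever you downloaded / PRG produced.
RECOVERY_IMG = "cwm-z2-tablet.img"
PREROOTED_ZIP = "prerooted-4.4.4.zip"

def flash_steps(recovery=RECOVERY_IMG, prerooted=PREROOTED_ZIP):
    """Commands for steps 1, 2 and 4: flash the temporary recovery into
    boot (where Sony keeps it), reboot into it, and push the pre-rooted
    zip so it can be installed manually from recovery (step 5)."""
    return [
        ["fastboot", "flash", "boot", recovery],   # step 1
        ["fastboot", "reboot"],                    # step 2
        ["adb", "push", prerooted, "/sdcard/"],    # step 4
    ]

def run():
    # Steps 5 and 6 (install the zip, reboot) happen on the tablet itself.
    for cmd in flash_steps():
        subprocess.run(cmd, check=True)
```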
It's messy, but it worked: it got me root, and recoveries work. Things that don't work:
  • When SuperSU updates from Google Play and tells me it needs to update the su binary, the only way I found to make it work was to download the flashable zip and do it manually from recovery.
  • I suspect other things will be in similar shape
  • /system seems to be locked to read-only when not in a recovery mode


Tonight I realized (while trying to copy some movies over) that I've seemingly broken MTP somewhere.  It tries to load, but Windows claims the device is broken (I've checked the setup with several other devices, they are all good), and Linux gives a similar explosion.  It's easy enough to pop the SD card and copy the files over via USB 3, but it's just odd.

2012-09-04 09:05 pm

K-9: Part 1 - The insanity begins

So as the title implies, I'm starting down a path of insanity.  Namely, I want to build a K-9 from the old Doctor Who series.  Now, it being me, I can't just *BUILD* a simple one, ohhhh no - clearly my sights aren't set high enough.  My intention is to build a K-9 capable of autonomously following me around (possibly with the assistance of various beacons, GPS, the internet, etc), and doing so in a crowd.  And if that wasn't enough, I'm hoping to get it to the point where it can climb stairs and ride up escalators.

Ok *NOW* my ambitions are set high enough (and my level of insanity is thus proven).

So I've started the planning of this project.  I've been able to snag a few fan-based basic designs for K-9 off the internet.  Rough base dimensions put the little guy in at about 18in wide, 18in tall at the shoulders and just shy of 30in long (not including tail or head).  So K-9 isn't terribly big (smaller than I expected honestly).  So with that found out (and I'll probably be insane and double check them based on extensive video watching), on to locomotion.

I could do this with an insanely simple 4-wheel-drive setup; it would work, be relatively simple, and still mostly achieve my goal.  That said, it won't climb stairs, so out the window that goes.  A quick survey of stair climbers reveals a few base types, most of which utterly suck when going over a flat surface.  I also have the added constraint that K-9's base (in canon) angles upwards a la a pyramid, which makes concealing things a bit harder (and it means there's a couple of drive types that just won't work).  My current thinking is to go with a set of tank-like treads that can climb stairs (a la the iRobot PackBot), and possibly provide a hydraulic backstep (horizontal to the base) to level out on something like an escalator.

I think step 1 for now is going to be to get a base with some tank treads on it and start experimenting.  This does mean I need to investigate motors, motor controllers, and tank treads, and dedicate something to the initial running of those pieces.


2012-08-15 11:24 am

All good things...

For the better part of a decade, I've been working on kernel.org. About half that time, I was a completely uncompensated volunteer, as that's how kernel.org was run until 2008. The remainder of that time I did the same work as a Fellow under the auspices of the Linux Foundation. When I say work, I mean much more than just holding a 9-5 job; I mean pouring blood, sweat, and tears into the inner workings keeping kernel.org running.

Thanks to the Linux Foundation, and the individuals and companies that support them, I've been able to work on kernel.org full time for the past 4 years. However, I have been doing some soul searching lately and I've realized the time has come to move on.

I'm sad to be leaving kernel.org - I've met a lot of people and made friends within the many communities I've dealt with over the years. (For the record, all of you are still invited over for bad movies and dinner anytime!)

kernel.org has always been more than just work for me. It’s been a passion and it will always hold a special place for me. It was never so much "work", as work has the wrong connotations. It’s been the very essence of the phrase “labor of love” - something I did because I loved doing it. The users and community are amazing; I will always remember when I asked for folks to gc their git trees and the kernel developers independently turned it into a contest to see whose tree shrank the most.

That being said, I'm going to take the rest of August off and relax, catch up on some open source stuff I haven't been able to get to (PXE Knife, maybe working on the stateful rsync stuff I've been pondering for years), and start considering what comes next. I will be down for a couple of days at LinuxCon, and then I'm off to the Penny Arcade Expo (PAX) up in Seattle late on the 30th (which also happens to be my birthday).

Beyond that, it's time to see what the future holds.

- John 'Warthog9' Hawley

2011-07-24 05:06 pm

Rental Car Review: Dodge Avenger

As I've been traveling a lot this year, I've had the need to get a number of rental cars for various reasons - mostly because I've been traveling well outside the range of public transportation.  I thought I'd start writing more public commentary on those experiences.  Today's installment: the Dodge Avenger R/T


Overall Grade: C
Would I buy it: No

2011-03-20 01:42 pm

AT&T + T-Mobile: Good lord, the US will become Canada!

So if you haven't heard the news, AT&T is buying T-Mobile US.  This looks like it's pretty much a done deal, save the typical regulatory hearings, and I don't see how any road-blocks can be thrown at this to make any difference.  Regardless of which carrier you are on - T-Mobile, AT&T, Sprint or Verizon - this is horrible and tragic news.

If you don't know why it's so bad, just look a little north beyond the border to Canada where, until very recently, there was a duopoly in the cell phone market.  On the surface there have been 4 carriers in Canada: Rogers, Fido, Bell and Telus.  Rogers directly owns Fido, which is hilarious as many Fido stores are opposite their parent company Rogers' stores, causing many customers to vehemently describe the horrible service they get from Rogers to Fido employees as they "switch" carriers.  That leaves Bell and Telus, which are independent companies but are so joined at the hip in the cellular space that they more or less mimic Rogers and Fido.  In effect only two entities have controlled the entire market, thus a duopoly.

What did this do for Canada?  It gave them some of the highest rates for cellular communications on the planet, horrible service, a stagnant market, and generally a complete disservice to every Canadian who uses a cell phone.  Now, I mentioned above that there have been 4 carriers in Canada; in 2009 Wind Mobile came on the scene, offering better service, cheaper prices, great customer service and everything else you could want.  Guess what happened?  People flocked to Wind Mobile like mad, and the existing duopoly immediately started attacking them on any grounds they could, in particular legally.  But it also forced those same stagnant monsters to start moving again: they've cut prices, are offering better service and generally doing a little better.  It's still pretty rough being a Canadian with a cell phone, but it's at least not quite as bad as it was.

Now getting back to the mess at hand, AT&T buying T-Mobile: what does this mean?  Well, let's start by equating T-Mobile (US) to Wind: they offer more competitive and CLEAR pricing, give good customer service and generally are well liked by their customers.  They don't have the same coverage that Verizon does, but it generally just works.  Heck, T-Mobile US has even been recognized, 3 years running, as one of the world's most ethical companies.  This has kept AT&T, Verizon and Sprint competing against T-Mobile, as well as amongst themselves.

When AT&T closes the deal with T-Mobile you can expect all of the things that make T-Mobile worth caring about - customer service, price and their ethical stance - to go out the window.  AT&T will raise prices, their customer service will go exactly the same way Cingular's did when it was acquired, and generally things will get worse for T-Mobile customers.  No surprise; it just sucks to be staring down that barrel.  However, this WILL make things worse for everyone.  How?  Because the US cellular market is already a set of walled gardens, and this just eliminates one of those gardens.

In the US there are two main sets of cellular tech being used.  There is what's referred to as CDMA, which is the basis for Sprint and Verizon, and there is GSM, which is the basis for AT&T and T-Mobile.  The catch with Sprint and Verizon is that neither will allow a device on their network unless they explicitly approve it.  This means if you have a Sprint phone and go to Verizon, even though the phone would work perfectly fine and identically on Verizon's network, they won't let it on because it's Sprint branded.  The same is true in the opposite direction: they each have their little sandbox, and they absolutely don't want to let anyone or anything from the outside in to play unless they approve it first.

AT&T and T-Mobile are slightly different; they use GSM at their core, which functions a little differently.  Mainly, the authentication mechanism for their networks isn't specific to a phone; it's attached to a SIM card, which can go into any GSM phone.  This has the advantage that I can get service from any carrier (they give me a SIM card), throw it into the phone of my choosing, and it just works.  Switch phones, move the SIM, it still works, etc.  There is nothing stopping you from taking an AT&T SIM card and putting it in a T-Mobile phone, or vice versa.  There is a small catch in the US though: AT&T and T-Mobile use incompatible, and completely different, 3G frequencies.  So if you are using data on a T-Mobile phone with a SIM from AT&T you'll get degraded data performance, and vice versa; HOWEVER, you will still get data, and phone calls still work.  I should also note that CDMA is only used in the US and a small handful of other places; GSM is by far dominant worldwide.

So when AT&T buys T-Mobile, this eliminates an open and friendly carrier, replacing it with a giant behemoth that doesn't care, and leaves you with two other carriers that are walled gardens you can play in.  Want to guess what's going to happen?  Everything will get more expensive.  Things you enjoy now will cost more, innovation in phones will get harder, and you will be more at the whim of your carrier for what they are willing to support.  The iPhone didn't come to T-Mobile, but you could use it there when it came out.  What happens when AT&T gets another exclusive phone?  Now you can't use it anywhere in the US but with AT&T.

This really does just make the entire US cell phone market a bigger nightmare, and will just foster further anger, resentment and dislike of cellular carriers overall.  We are heading towards the same fate already enjoyed by our northern brethren: higher costs, less service, and falling further and further behind the rest of the world while being mocked for our backwardness.
2011-02-27 12:43 pm

Mediacom: or how an ISP is blatantly violating my privacy.

Some quick background information: I'm visiting my parents out in the middle of the USA, and they happen to have a fairly reasonable internet connection provided by Mediacom Cable.  My parents only really have two choices for high speed internet, Mediacom Cable and AT&T DSL.  DSL should be awesome, but they live literally on the edge of town, so they have not always had a good path back to the copper loop - so cable it is.

In the past few years ISPs have been abusing their power over DNS by doing NXDOMAIN (domain not found) redirects, mainly so that they can gain additional ad revenue by redirecting you to their search pages / engines.  This, while annoying, is trivial to get around, as we can run our own DNS servers, or make use of Google's, Level3's or OpenDNS's servers instead of the ISP's.  Sometimes the ISP goes slightly beyond this by proxying the DNS results, but Mediacom isn't guilty of this (that I know of).  Mediacom is guilty of doing NXDOMAIN hijacking, but I long since switched off of their DNS servers as I blatantly don't trust them.

While NXDOMAIN hijacking is evil, and I'm not alone in that belief, what I just found out today blows "evil" out of the water.  Mediacom is doing deep packet inspection and is trapping web page 404's (file not found) and redirecting you to a Mediacom search webpage.

What does this mean?  It means if you type in http://www.example.com/i-miss-typed-something you should get a webpage that says "404 file not found", and it should return the 404 status code in the header.  This error code is particularly important for automated scripts, and for letting your browser know that something went wrong.  Instead, what is returned is a webpage that redirects you (via javascript, no less) to a Mediacom "search assistant", specifically http://assist.mediacomcable.com/mediacomassist_pnf/dnsassist/main/?domain=http://www.example.com/i-miss-typed-something - meaning your browser never knows something was wrong, and you don't get to the page you were looking for.
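You can spot this kind of rewriting from code: a request for a page that cannot exist should come back 404, and a 200 carrying the ISP's "assist" redirect page instead is the signature of in-flight tampering. A small sketch of the check, with the fetch kept separate from the pure detection logic (the marker string is the assist host seen above):

```python
import urllib.request
import urllib.error

def fetch(url):
    """Return (status, body) for a URL, treating HTTP errors as data
    rather than exceptions -- we *want* to see the 404."""
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status, resp.read().decode(errors="replace")
    except urllib.error.HTTPError as e:
        return e.code, e.read().decode(errors="replace")

def looks_hijacked(status, body, marker="assist.mediacomcable.com"):
    """A bogus path that returns 200 with the ISP's redirect page in the
    body, instead of a plain 404, indicates the proxy rewrote it."""
    return status == 200 and marker in body

# Usage: looks_hijacked(*fetch("http://www.example.com/i-miss-typed-something"))
```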

The only way for Mediacom to do this is to proxy all of their web traffic and inspect it in flight.  Why is this a major issue?  It means that Mediacom is literally looking over your shoulder on every website you are viewing.  This gives them the ability to do things like read your bank account passwords and know what medical information you are searching for online.  Mediacom, by definition, has always known what other machines you are talking to online, but they are now actively listening in on the conversations you are having.  (Note: this is all a moot point if you are using encryption, i.e. HTTPS, thankfully, since they can't do proxying on that - well, sorta.)

It also means that Mediacom is CHANGING information before it gets to you.  Right now they are modifying file not found pages, but what's to stop them from adding additional ads to webpages for their own benefit?  What's to stop them from transparently redirecting you to re-written articles that claim to be from the original source?

Mediacom gives an "opt-out" for this, which could have been fine: change the routing for the cable modem so that it bypasses the proxy, and everything goes on as normal.  However, they didn't choose to implement it this way.  No, the "opt-out" is a browser cookie that gets set for your specific browser - a browser cookie which is trackable in its own right, and which doesn't actually prevent your traffic from passing through the proxy at all.

So far the only thing I've been able to find is that Mediacom is specifically paying attention to browser user agent strings, which are strings your browser sends to identify what it is.  Command line clients, like wget, elinks, etc., seem to not be given the javascript page, but things like Chrome, Firefox, iPhone, Android, etc. all seem to be given it.
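Since the rewriting appears to be keyed off the User-Agent header, probing the same bogus URL under different agent strings makes the behavior visible. A sketch, with the comparison kept as a pure function so it can be checked independently of the network (the agent strings themselves are just illustrative samples):

```python
import urllib.request
import urllib.error

# Sample agent strings to probe with -- any set will do.
AGENTS = {
    "wget": "Wget/1.12",
    "firefox": "Mozilla/5.0 (X11; Linux x86_64) Gecko/20100101 Firefox/4.0",
}

def probe(url, agent):
    """Fetch a known-bogus URL under a specific User-Agent; return the
    HTTP status code, treating error responses as data."""
    req = urllib.request.Request(url, headers={"User-Agent": agent})
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code

def rewritten_agents(results):
    """Given {agent_name: status}, pick out the agents whose 404 was
    swallowed and replaced with a 200."""
    return {name for name, status in results.items() if status == 200}
```

On an honest connection every agent should report 404 for a bogus path; any agent showing up in `rewritten_agents` is getting the javascript redirect page instead.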

To say this is an outrage is an understatement; this is a gross violation of privacy and is nothing but a greedy and evil decision on the part of Mediacom.

If you are on Mediacom, I hope you find this and get a chance to read it, and that you would politely, but firmly let Mediacom know this is unacceptable.  It would also be good to write your Senator and Congressmen about this, and to file a complaint against Mediacom with the FCC.

There is additional information on this out there if you are interested, and I'm sure more can be found without too much issue.
2011-02-21 05:38 pm

Amusements in dependency trees

So one of the things I've been doing is slimming down some of our installs - I mean, why does a server need KDE installed?  Some of this goes back to the days when we didn't have things like 'yum' to deal with dependencies and install stuff on demand, so to speak.  What did we do back then?  We had an "install everything" mentality.  This has some advantages, but as time has progressed it's become more and more of a liability, so I'm slimming our installs down to what we need vs. everything.

That said I keep finding some amusing chains that make you scratch your head:
  • yum-updatesd (a server daemon that deals with automatically updating the system) requires libpng (the library to render png images)
    • Other things that depend on libpng:
      • dbus-python - D-Bus python bindings for use with python programs.
      • gobject-introspection - scan C header and source files in order to generate introspection "typelib" files
      • jed - console text editor
      • slang-slsh - Interpreter for S-Lang scripts
      • system-config-network-tui - command line / curses based network configuration programs
      • yp-tools - NIS (or YP) client programs
      • ypbind - NIS daemon which binds NIS clients to an NIS domain
      • yum-updatesd - Update notification daemon
  • ImageMagick (a series of command line image manipulation tools) requires avahi (the Zeroconf automatic network setup system daemon)
  • Everything and its monkey's uncle is dependent on nss-softokn-freebl
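The "what depends on libpng" question is just a reverse lookup over the Requires data (repoquery --whatrequires does the real thing against the RPM database). A toy sketch of the idea, with a tiny hand-made sample rather than real RPM metadata:

```python
def reverse_deps(requires):
    """Invert a {package: [things it requires]} map into
    {requirement: set of packages that require it}."""
    rdeps = {}
    for pkg, reqs in requires.items():
        for req in reqs:
            rdeps.setdefault(req, set()).add(pkg)
    return rdeps

# A tiny sample mirroring the chains above (not the real RPM data).
REQUIRES = {
    "yum-updatesd": ["libpng", "dbus-python"],
    "jed": ["libpng", "slang"],
    "ImageMagick": ["libpng", "avahi"],
}

# reverse_deps(REQUIRES)["libpng"] -> {"yum-updatesd", "jed", "ImageMagick"}
```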

Just random things I've noticed; it's mostly amusing to see how certain dependency trees act.  These aren't perfect representations, and there's usually weirdness involved in any absolute dependency tree, like the one RPM uses.

Just sharing my amusement with the weirdness that is my job.

2010-09-13 05:50 pm

Kernel.org and the DDoS we nearly missed

So on Friday of last week I got a slightly frantic phone call from our US upstream data provider, ISC.  I completely missed the calls, but when I checked my voicemail I was a little surprised to hear:

"Hey John, so it looks like you're the subject of a DDoS attack. We just wanted to let you know, and we are going to start blocking some traffic at our switches for you. Give us a call back."

Errr, what?!  As it turned out, someone with a botnet decided to point its impressive abilities at kernel.org by trying to flood it with completely random UDP traffic on arbitrary ports.  According to ISC they were seeing nearly 3 Gbps (yes, that's giga-bits per second) of incoming bandwidth being directly targeted at the two machines that service www, git, android.git, mirrors.git and a number of other sites.  This could have gone very badly, but...

Strangely enough, no one reported any inability to get to the sites or problems getting data or anything.  Those two boxes were seeing their entire incoming bandwidth full of a lot of garbage and they just kept trucking along.  Loads didn't spike, memory usage stayed fairly consistent and we just kept going.

So my hat goes off to HP for donating some dead rock solid hardware to us; those DL380 G5s we got a couple of years ago are happily humming along being awesome.  I will also heave a sigh of relief knowing this could have gone a lot worse.  They could have targeted all of the machines, both US and Europe, and both the www and mirrors boxes.  They could have targeted some of the equipment we have at Oregon State.

Thankfully what they targeted was capable of keeping up with the onslaught, and our upstream providers were able to handle the sudden jump in traffic!  For the record, I can't say enough good and awesome things about ISC being one of our upstream bandwidth providers and they handled the whole thing spectacularly.

Things thankfully quieted down over the weekend and stuff seems to be back to normal.  We are keeping an eye on the bandwidth graphs right now, but suffice it to say we survived!

2010-08-27 11:02 pm

Algorithmic Numbers

Computer Science has only 3 numbers that matter.
  • 0 - There is nothing to worry about, this is easy!
  • 1 - Nearly as easy as 0, things are simple, doing things once isn't so hard & honestly this is how most people think.
  • Many / Infinite - This is where things get a little tricky. The step from 0 to 1 isn't a huge jump, but the step from 1 to Many is a doozy.
I've got maybe 60-70% of the infrastructure in place now to deal with parallel MediaWiki installs on korg, and a lot of grunt work done with respect to PHP.
2010-08-05 01:43 pm

Why the Internet will win

So a friend of mine and I got to chatting about cable television again; this is a topic we re-dig up every 6 to 12 months, taking a quick gander around the Internet to see what's happened / changed and whether the media companies have come to grips with letting people use the content they are paying for the way they want.  I mean, there's already very LITTLE of that content I actually want; taking a quick glance at my steady and reliable MythTV box, I have 12 distinct shows that will be recorded in the next week.  This can't be encouraging for the cable companies, who are providing me with thousands of options a day.

The cable companies are also just annoying.  First we had analog; everything could talk to it: DVRs/PVRs, VHS recorders, Beta recorders, my TV - heck, I'm sure there's a toaster out there that could read the signal.  Now the cable companies want digital.  OK, I'm cool with that: less RF spectrum, higher quality, etc.  However, now I need a magic converter box to go from the digital to the analog everything else talks.  OK, that's fine, this still mostly works; it's just a pain to set up and get running again since things aren't quite as simple as before.  Now I'm not harping on this - it sucks to have converter boxes, but the picture quality has gotten a lot better.  For instance, black is now actually black - who knew!

Now there's a standard out there for doing digital from end to end. This would eliminate my analog loop, rid me of the IR blasters, and generally my picture quality should go up; then I can participate in this new-fangled High Definition video stuff.  This common standard is called cable card [or is it m-card this week, this seems to change pretty often], and even many TVs ship with support for it.  So I'm looking around at what's available outside of a TV, mostly speculatively, for magic-card based TV tuners, so that I might be able to remove the weird analog loop and generally just have a cleaner setup.

They won't let me however.

ATI came out with a cable-card ready TV tuner.  It used USB, exported itself as a network device, and only works on Windows because of DRM.  Ok, so I looked at the Silicon Dust folks, makers of the HDHomeRun.  They have had an exceptional track record of supporting things other than Windows, and even show MythTV as supported on their website for their original product.  This is awesome!  Then I looked at the Prime, their m-card based digital product: Windows 7 only.  Again, more DRM that's happening and completely shutting out anything that's not willing to play with them.  Finally I found a newcomer to the whole mess, Ceton Corp's InfiniTV 4: a 4-tuner, single m-card based PCIe card for your media center.  Ok, this is starting to sound too good to be true, it's just about perfect - ohhh there's the gotcha and the catch - Windows 7 again.

*SIGH* ok so the media companies, well Cable companies anyway, basically hate me as a customer and are pretty much doing their best to tell me to go away, they don't want my money.  Then I look at things like Hulu, and the other online versions of the "cable company".  I can watch what I want when I want (though granted this isn't always true), and I can give them money for better quality or more access.  This isn't quite my DVR but it's starting to sound more like what I want.

I've had a saying for a few years about format wars, we are at the point where the Internet will win.
  • Blu-ray vs. HD-DVD - Internet will win
  • Cable vs. Dish - Internet will win
  • Analog vs. Digital - Digital on the Internet will win
I wish companies would stop trying to create unusable walled gardens, I don't mind paying for content that I deem is worth it, but please let me use that content for my own personal use the way I see fit.  The music industry has realized this, why can't the visual space catch up and get with the program?  If not, the Internet will win - and I'm kinda ok with that.
warthog9: Warthog9 (Default)
2010-06-15 03:03 pm

Adobe Flash - or why Adobe obviously hates its customers

Ok so the title is a little over the top, I'll admit, but let's be honest: Adobe hates you, because Flash continues to suck.  There's a whole host of reasons, from it chewing up your CPU because it can, to exhausting your memory, to offering the universe a million *NEW AND IMPROVED* ways to hack your computer.  However, ignoring all of that, Adobe is right that Flash can quite literally be equated with the web.  Without Flash the web is a pretty desolate place: no youtube, no streaming video, many streaming audio sites won't work (I'm looking at you CBC! Do you know how much work I had to do to find your icecast streams again?!?!), it's quite a mess.

And while I think that Apple making a stand against Flash is a good thing (open standards are always better, more long-term, and overall better for everyone), Apple is wrong for blocking it from its devices, in particular the iPad.  It's a bloody internet tablet, something that's been done over and over again; Apple just has the right brand of magic smoke that people are inhaling to think this is both new and revolutionary and that they should buy it now, if not sooner.  But I digress from what I really want to complain about.

What I really want to complain about is the fact that since Adobe thinks of itself *AS* the web, you would expect it to actually keep up with technology and be quite supportive of everything, right?  WRONG - there was recently quite a spate of exploits in Flash 10.0.xx (and everything before, I think) that meant that hacking your box was as easy as sneezing.  Flash basically provided a way to run arbitrary code on your computer, as the user you're logged in as (meaning if you're on Windows and running as root / Administrator it had full access to your machine; if you're on a sensible OS that has strong user protections like *NIX or Mac it meant it could trash your data, but not much more).  Fine, dandy: release an update and we can all move on with our lives... Well they did, except in doing so they ELIMINATED support for 64-bit Flash, which even though it was "alpha / beta / test / don't use this" has been way easier to deal with and rock solid for me under Linux.

They claim it will come back, but honestly this is like telling everyone who was having a good time actually proving their product didn't suck that they don't care about them.  We've had 64-bit x86 CPUs for something like a decade now, we've had good support for 64-bit code on Linux for nearly that long, and we've had good browser support for 64-bit for at least the last 5 years (mainly because I've been running 64-bit browsers and been quite happy with them, thank you very much), so why can't Adobe figure out how to build a 64-bit version of Flash and properly support it?

Because Adobe hates its customers, plain and simple.

Pardon me, while I go figure out how to wrap flash in a haze of indirect wrappers just to make it work, and so I'm not vulnerable to the internet and can go back to watching my youtubes.
warthog9: Warthog9 (Default)
2010-05-09 11:27 am

Great moments in coding amusement

So I've been spending a lot of time re-working this database-backed website, and I'm moving towards it going 'live', so I've got a copy of the database and I'm running the modifications.  One of the queries I had goes something like this:

mysql> -- This will likely take something like 5 minutes?
mysql> update table1,table2 set table1.guilty_id = table2.id where table1.guilty = table2.name;
Query OK, 3128414 rows affected (1 day 9 hours 14 min 53.02 sec)
Rows matched: 3128414  Changed: 3128414  Warnings: 0

Something tells me the first time I ran that query, and it took only 5 minutes, was a fluke!
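A 33-hour run on a two-table update usually smells like a join with no usable index, so every row of table1 forces a full scan of table2. A sketch of the usual fix, using sqlite3 for portability (the table and column names come from the query above; the data, the index name, and the correlated-subquery form of the update are mine, since sqlite has no MySQL-style multi-table UPDATE):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE table1 (guilty TEXT, guilty_id INTEGER)")
cur.execute("CREATE TABLE table2 (id INTEGER, name TEXT)")
cur.executemany("INSERT INTO table2 VALUES (?, ?)",
                [(i, "user%d" % i) for i in range(1000)])
cur.executemany("INSERT INTO table1 VALUES (?, NULL)",
                [("user%d" % (i % 1000),) for i in range(5000)])

# Without an index on table2.name, matching is O(rows1 * rows2);
# with it, each lookup is a tree search instead of a table scan.
cur.execute("CREATE INDEX idx_table2_name ON table2 (name)")

# Same effect as the MySQL multi-table UPDATE above, and just as
# dependent on that index:
cur.execute("""UPDATE table1
               SET guilty_id = (SELECT id FROM table2
                                WHERE table2.name = table1.guilty)""")
conn.commit()
cur.execute("SELECT COUNT(*) FROM table1 WHERE guilty_id IS NULL")
print(cur.fetchone()[0])   # -> 0: every row matched
```

On MySQL, `EXPLAIN` on the original query would show whether it was scanning; an index on `table2.name` (and ideally `table1.guilty`) is the difference between minutes and days at 3M rows.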

warthog9: Warthog9 (Default)
2010-04-29 10:56 pm
Entry tags:

Hamming it up!

 So HPA has been advocating that I go and get my Amateur Radio License for several years now, and it's been sitting on my todo list since then.  He finally suggested, to a group of us, that he would do a one day class on Saturday and then on Sunday we would head out to take the test.

So this last Saturday we all headed over to HPA's house and had a long cram session.  It was fun, though we covered a lot of material in one day.  We had Leslie Hawthorn, HPA's wife Suzi, Eric Biederman and myself in the class, and HPA was assisted by Leslie's main squeeze Jack.  Was awesome!

We then did the utterly insane thing, and traveled from the South Bay up to Oakland California and were at the Oakland Fire Department at the wonderful hour of 9am.  Yes you heard me right, 9am which meant that I was up at some ungodly hour and drove up to Oakland, after picking up Leslie, Jack and Eric!  Was a fun carpool up, and then we took tests!

Turns out we all made Technician-level Hams; I got close on General, but not quite.  Dropped Leslie and Jack off at the Stinking Rose for them to have Jack's birthday lunch, and Eric and I celebrated by snagging lunch at Boudin and going for a long walk from the wharf up to the bridge and back (and me hitting my head hard on a tunnel, ouch!)

But it was a fun day all the same, and now I'm officially licensed to HAM it up ;-)
warthog9: Warthog9 (warthog9)
2010-02-19 01:28 pm
Entry tags:

Exchange Hosted services or how SSL broke e-mail

UPDATE July 6th, 2011: Updated to include a *POSSIBLE* fix from Microsoft about the problem.  I have no way of testing or confirming it, but it sounds like it has potential.
UPDATE March 2, 2010:
Updated to include, now known, actual cause of the problem -JH

Recently I've had the exciting job of debugging a massive e-mail failure between kernel.org and a large hardware manufacturer (LHM). The LHM has been particularly nice about this, understanding of the problem, and kind enough to do my evil bidding while I try to debug this whole problem with only one side of the entire equation to work with.

Here's the problem: The LHM has contracted all of their e-mail out to Microsoft's Exchange Hosted Service (EHS). Generally speaking this is fine, it saves LHM a lot of money in their IT budget, the mail servers are more up to date, generally more secure, etc. Really I don't have a lot to complain about this choice - except - when it fails.

So the long version of this story is: back in December, LHM's couple of employees who need to e-mail kernel.org on a regular basis started noticing that their e-mails weren't getting anywhere, and in fact were getting bounced back at them with some strange errors. They didn't think *TOO* much about it at the time, but the problem persisted, and around mid-January they came to kernel.org and asked what we were seeing. I figured it was just a problem with our greylisting; we are a little more aggressive about it than others and it catches up some mail servers. So I sent the #include <STD_EMAIL_GREYLIST.h> response and expected the problem to go away. Sadly it didn't, so I opened up a proper look into what was going on.

LHM's employees were quite forthcoming with what they knew about how their e-mail worked, mainly that it was a giant black box that no one at LHM could see into because it was a Microsoft run service (known as Bigfish). I asked kindly if they could get me the logs from their box, or at least all of the error messages and I started delving into my logs.

That's when I noticed something very weird.

Feb XX YY:58:03 hera sendmail[32520]: STARTTLS=server, error: accept failed=-1, SSL_error=5, errno=104, retry=-1
Feb XX YY:58:03 hera sendmail[32520]: o1JKw24C032520: va3ehsobe005.messaging.microsoft.com [] did not issue MAIL/EXPN/VRFY/ETRN during connection to MTA

Hmmmmm ok, that's odd. No really, that's very odd... This had been happening since Decemberish or so. Great. But I was having the employees from LHM e-mail both kernel.org and my personal domains, and the e-mails to my personal domains were going through - why?

I started checking the certs, I started monitoring more things, then I paid slightly closer attention to my personal server's mail logs: my primary mail server was exhibiting the same problem that kernel.org was - ok, at least I'm not insane, but why was I getting the e-mail? I looked at my secondary mail servers, and one of them - my longest running box

# uptime
13:02:45 up 1003 days, 15:27, 1 user, load average: 0.06, 0.11, 0.09

Was actually receiving the mail, and actually accepting it. It would spool on that machine and then push to my primary without issue. Hmmmm......

So I turned up the debugging levels on kernel.org and had LHM's employees send me more e-mails. This is what I found:
Feb  4 07:17:37 hera sendmail[22509]: NOQUEUE: connect from
va3ehsobe005.messaging.microsoft.com []
Feb  4 07:18:13 hera sendmail[22509]: AUTH: available mech=NTLM GSSAPI
Feb  4 07:18:13 hera sendmail[22509]: o147Hb7C022509: Milter (clamav):
init success to negotiate
Feb  4 07:18:13 hera sendmail[22509]: o147Hb7C022509: Milter
(spamassassin): init success to negotiate
Feb  4 07:18:13 hera sendmail[22509]: o147Hb7C022509: Milter (greylist):
init success to negotiate
Feb  4 07:18:13 hera sendmail[22509]: o147Hb7C022509: Milter: connect to
Feb  4 07:18:13 hera sendmail[22509]: o147Hb7C022509: milter=clamav,
action=connect, continue
Feb  4 07:18:13 hera sendmail[22509]: o147Hb7C022509:
milter=spamassassin, action=connect, continue
Feb  4 07:18:13 hera sendmail[22509]: o147Hb7C022509: milter=greylist,
action=connect, continue
Feb  4 07:18:13 hera sendmail[22509]: o147Hb7C022509: --- 220
hera.kernel.org ESMTP Sendmail 8.14.3/8.14.3; Thu, 4 Feb 2010 07:18:13 GMT
Feb  4 07:18:13 hera sendmail[22509]: o147Hb7C022509: <-- EHLO
Feb  4 07:18:13 hera sendmail[22509]: o147Hb7C022509:
milter=spamassassin, action=helo, continue
Feb  4 07:18:13 hera sendmail[22509]: o147Hb7C022509: milter=greylist,
action=helo, continue
Feb  4 07:18:13 hera sendmail[22509]: o147Hb7C022509: ---
250-hera.kernel.org Hello va3ehsobe005.messaging.microsoft.com
[], pleased to meet you
Feb  4 07:18:13 hera sendmail[22509]: o147Hb7C022509: ---
Feb  4 07:18:13 hera sendmail[22509]: o147Hb7C022509: --- 250-PIPELINING
Feb  4 07:18:13 hera sendmail[22509]: o147Hb7C022509: --- 250-8BITMIME
Feb  4 07:18:13 hera sendmail[22509]: o147Hb7C022509: --- 250-SIZE
Feb  4 07:18:13 hera sendmail[22509]: o147Hb7C022509: --- 250-DSN
Feb  4 07:18:13 hera sendmail[22509]: o147Hb7C022509: --- 250-ETRN
Feb  4 07:18:13 hera sendmail[22509]: o147Hb7C022509: --- 250-AUTH
Feb  4 07:18:13 hera sendmail[22509]: o147Hb7C022509: --- 250-STARTTLS
Feb  4 07:18:13 hera sendmail[22509]: o147Hb7C022509: --- 250-DELIVERBY
Feb  4 07:18:13 hera sendmail[22509]: o147Hb7C022509: --- 250 HELP
Feb  4 07:18:13 hera sendmail[22509]: o147Hb7C022509: <-- STARTTLS
Feb  4 07:18:13 hera sendmail[22509]: o147Hb7C022509: --- 220 2.0.0
Ready to start TLS
Feb  4 07:18:14 hera sendmail[22509]: STARTTLS=server, info: fds=6/4, err=5
Feb  4 07:18:14 hera sendmail[22509]: STARTTLS=server, error: accept
failed=-1, SSL_error=5, errno=104, retry=-1
Feb  4 07:18:14 hera sendmail[22509]: o147Hb7C022509:
va3ehsobe005.messaging.microsoft.com [] did not issue
MAIL/EXPN/VRFY/ETRN during connection to MTA
Doing some quick deciphering on the error returned by sendmail's attempted TLS connection, I get the following:

SSL_error=5 | SSL_ERROR_SYSCALL
	See SSL_get_error(3) for more on what SSL_ERROR_SYSCALL entails.
	Short version is that it's an error reported by the underlying
	I/O layer, and to check what errno was returned.
errno=104 | ECONNRESET
	#define ECONNRESET      104     /* Connection reset by peer */
	From /usr/include/asm-generic/errno.h, which is provided by the
	Linux kernel itself (as it's handling the TCP/IP stack here).
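Grepping errno.h by hand gets old fast; Python's errno module will do the same lookup for you (a quick sketch, and note the 104 mapping is a Linux value, other platforms differ):

```python
import errno
import os

# errno 104 from the sendmail log, decoded:
print(errno.errorcode[104])            # 'ECONNRESET' on Linux
print(os.strerror(errno.ECONNRESET))   # the human-readable message
```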

Long story short, this looks like bigfish opens up a connection to my mail server, attempts to start TLS (a secure connection), and then bigfish sends a reset and sendmail goes "Ergh, well I guess we didn't want to chat after all".

I've sent all of that to LHM, and as weird as this is going to sound - I'm now working through Microsoft's support side of things (I think I'm in Layer-2 with this; the Layer-1 guy was keen enough to realize we were way outside the depth of his script and bumped us up) to get this resolved. It's kind of weird to be the Chief Administrator for kernel.org and be debugging Microsoft's Exchange mail server for them - it brings a smile to my face in a lot of ways.

Some additional details should others be seeing this or want to know more:

  • Openssl versions 0.9.8g and before seem to work fine, this would include my ancient Fedora Core 5 secondary mail server
    • Redhat Enterprise Linux (RHEL) / Centos 5 ship with Openssl 0.9.8e (this seems to work)
    • Debian Lenny is at 0.9.8g (which was confirmed by an employee from LHM to work)
    • Fedora <= 10 ships with 0.9.8g or earlier (This is known to work)
    • Fedora >= 11 ships with 0.9.8k (This is known to NOT work)
    • Ubuntu 9.10 may have 0.9.8k in it - this is known NOT to work
    • Opensuse has 0.9.8k in it - this is known NOT to work
  • Openssl has not been updated on my Fedora 11 or Fedora 12 boxes, this is not a change on my server side as there have been no updates to those packages that have been pushed down through Fedora's update process.
  • Disabling TLS negotiation in sendmail alleviates the problem, but I'm not keen to disable it universally if I can avoid it.  I generally think that TLS is a good thing, and that MTA <-> MTA communications should at least have some modicum of security (even if it's not perfect)
  • There are Microsoft domains that can send e-mail to these systems fine, they however do not attempt TLS/SSL negotiation
I'll update things here if/when I find out more information.  I'm honestly unsure that anything will come of all of this; I've provided Microsoft with packet dumps of the connection attempts and I've given them all of the information above.  I'm not entirely sure how much more debugging I can do for them (Microsoft) on this issue, just because I'm not sure there's anywhere else for me to go at this point.

I'm going to post this in the hopes that if someone Googles for some of this they have at least some explanation of what's going on.  I would also argue that the only reason this isn't more widespread is that Systems Administrators have a tendency to be stodgy and slow-moving with their mail servers, and I doubt there are really that many out there running something as new as Fedora 11; I would guess the vast majority are on something like RHEL/Centos or Debian, which aren't affected - yet.


Ok so I got lucky on this one and I can now point out the exact and specific issue with the whole thing.  I've struck out a chunk of data above as it's not relevant any more (and actually wrong now that I know the exact cause).

So after going through all of the above, and the LHM helping push Microsoft on the issue (for the record, to date I still have not seen a *SINGLE* line of log information from Microsoft, nor any good analysis from them), to be perfectly honest I ended up in a Copy/Paste war with them while they were trying to analyse the logs I had sent them.  I had thoroughly walked through the whole thing with way more detail and insight, and Microsoft latched onto something completely unrelated and blamed my whole SSL negotiation issue on Sendmail's inability to write out statistical data.  Needless to say I was unamused and very, very frustrated with the whole situation.

However, I got lucky.  One of the other kernel.org administrators (Kees Cook specifically) happened to pipe up on the problem, and comment that it looked exactly like a problem he had only recently been debugging with OpenSSL and GnuTLS.  After some quick checking I can confirm that the problem *IS* a bug on Microsoft's side, but it's one that's caused by a specific setting on the end that Microsoft is talking to.

The problem, as excruciatingly detailed above, comes into play during the SSL/TLS negotiation, specifically in the Certificate Authority section.  When you send the certificate you also send the Certificate Authority certificate, both for verification of the CA and for verification of the cert itself.  The default on most systems (particularly Fedora) is to send the entire CA bundle that the OS ships with.  This is a sledgehammer approach that should work for most everyone, and it's understandable why it ships this way: most CAs are included in the bundle and it just works.  The problem comes in when the size of the bundle gets too large.

On my older Fedora secondary mail server (the one with the obscene uptime) the CA bundle is about 418K in size; on my F11 boxes it is 654K.  Not a huge change, but it's enough to tickle this problem: when the CA bundle grows too large, the remote side wigs out and can't handle it.  This was seen by Kees with GnuTLS: when it got the bundle it would more or less exhaust the buffer and just die.  Thankfully he had access to both sides of this problem and could reasonably debug it.  I did a couple of small checks and lo and behold - that was the problem.  So what I ended up doing was changing the CA cert that sendmail passes back to be our own certificate.  Why?  It's a self-signed cert, and we don't really have a CA that's signed it, so I'm not terribly worried about it, and *VERY* few (probably no) mail servers try to verify the cert before sending the mail, mainly because mail is handled at a central server; there's no one to verify the cert manually.
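The workaround described above boils down to one knob in sendmail's m4 configuration: point confCACERT at the server's own certificate instead of the distribution's full CA bundle. A sketch (the file paths are illustrative, not what kernel.org actually uses; adjust to your install and rebuild sendmail.cf):

```
dnl In sendmail.mc: send only our own (self-signed) cert as the "CA",
dnl instead of the ~650K system-wide CA bundle that trips up the remote end.
define(`confCACERT', `/etc/pki/tls/certs/sendmail.pem')dnl
define(`confSERVER_CERT', `/etc/pki/tls/certs/sendmail.pem')dnl
define(`confSERVER_KEY', `/etc/pki/tls/private/sendmail.key')dnl
```

This keeps STARTTLS enabled for everyone else while shrinking the handshake enough for the broken peer to cope.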

So there you have it: the bug is present in EHS, and it's trivially fixable by the end that Microsoft is talking to (in my case kernel.org), but for this problem to really go away it needs to be fixed in Exchange, i.e. Microsoft needs to fix this.  I've handed all of this information to Microsoft and the LHM; MS assures me there is a bug filed with the Exchange team on it, but there's no ETA on when this will be fixed - my guess is somewhere around 2020, but I might be being pessimistic.

This now serves as a one stop source for this problem, hopefully others will find it should they run into this problem - I can only hope.

Was just contacted by the company that had the problem originally as a potential "fix" from Microsoft may have some impact on this:


What's interesting about it is that it talks about TLS/SSL fragmentation.  Now that said, they do talk about the negotiation ("TLS/SSL handshake messages become too large to be contained in a single packet"), which sounds a little odd, and I guarantee you the stage at which the CA cert gets sent spans many packets (the CA bundle that worked being 418K, and the max MTU 1.5K).  Also, TLS/SSL for most uses runs over TCP/IP, where you aren't particularly worried about the size of a single packet.  But I begin to wander and digress.

If you find this, and you are having this problem give the KnowledgeBase article a read, possibly even give it a shot.  If it works let me know, I would actually be fascinated to know.

Thanks to renormalist for giving me the heads up on the KB article!
warthog9: Warthog9 (Default)
2010-02-19 12:49 pm
Entry tags:

Of Markets and mayhem && Ode to a tab key

So I've had a problem adapting to the Nexus One these past couple of weeks. It works great, it's super fast, the screen is bright, clear and astounding. But I've run into two problems that are continuously causing me to scratch my head and go 'ERGH?'
  1. There's not enough room for apps.
    So I moved over from my hacked G1 to my hacked N1, and immediately found I couldn't port over all of the apps I had installed. This isn't a huge deal in the grand scope of things, it just means that I don't have the bazillion tower defence games on my device - now I only have a million or so ;-)
  2. Where's the goram frelling TAB key?!?!?!?!?!?!?!
    So apparently I use the TAB key a lot.  Like daily, in nearly everything I do
    1. When I want to write a grocery list I will most often open up a note of some sort add one item per line and when I get it at the store I tab it over so that it's not completely left justified anymore
    2. When I want to align text, in notes, in my calendar, anywhere
    3. When I'm using ssh - *ESPECIALLY* when I'm using ssh
So obviously 1 there isn't worrying me too much, Google has already said they will support running apps off the SD Card at some point in the future, so I'm not terribly worried.  As for 2 however - I'm astounded.  I can't be the only person who wants a keyboard with a tab key.  My G1 has one on the physical keyboard, so does the Motorola Droid/Milestone.  Yet there is no tab on the virtual keyboard, not as a shift character, not in the symbols, and not as an alternative in the symbols.  I mean seriously I get a character that looks like the new Netapp logo, but I don't get a tab key?!

So this started my adventure into the Android Market.  I've generally had good luck rummaging around in there (except when I'm in Canada and I want to buy an app, come on Google let people North of the Border buy apps!  It's either that or I use my proxy / vpn).  So I figured, hey the keyboard on Android is replaceable lets see what they've got.

One search for 'keyboard' later, I noticed there are something like 600 (I counted) applications for keyboards!  I mean, there's what, 30,000 apps on the Market, and 2% of them are keyboards - that's pretty impressive... but wait a second, I thought to myself, looking just a smidgen closer, something is amiss here.....

Most of these "Apps" are themes, or skins, or whatever they want to call them for a single application: Better keyboard.  To find what I was really after I had to wade through page after page, after page, after page of "Applications" to find the maybe 10 or so keyboards in the market to try and find one with a tab key ( For the record AnySoftKeyboard is what I finally found that had a tab key and was, more or less, usable).

This is when it dawned on me: the market idea, as it currently stands, is doomed.

I'm not being a naysayer here; in fact I really like the idea of the Market, or the Apple App store (for definitions of "like" that include the closed-minded "we can bounce your app if we don't like it for any reason, like it was sunny in California today").  The ideas are quite sound: give the user a one stop shopping place to find apps to make their device more awesome, slower and in need of being upgraded.  Great!  But here's the catch: for things like Better Keyboard they offer themes or skins or whatever they are, and each of these ends up taking up a slot in the market, and thus becomes the problem: there's so much random stuff I really don't care about that I can't find what I'm looking for.

Why are there apps that show you pictures of attractive individuals?  Isn't that what the web is for?  Why isn't there a way for people to say "I'm a theme of Better Keyboard" and have all of those "Apps" collapse under the general heading of better keyboard?  Why must the Market be such a disastrous mess I can't even find what I'm looking for?

What we really need is better organization.  Things like themes should be more directly attached to the application; heck, even a little sub-category for them all would work.  Click on Better Keyboard and you can install it, and after the comments section there's a listing of all the themes for the application, which take you to their individual pages, etc.  But the entire thing shows up as only a single entity when I search the market.
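The grouping described above is really just one extra field of metadata; a toy Python sketch (the "parent" field and the app names used as data here are hypothetical, not anything the real Market exposes):

```python
from collections import defaultdict

# Hypothetical market metadata: a "parent" field marks theme/skin packages.
apps = [
    {"name": "Better Keyboard",  "parent": None},
    {"name": "Neon theme",       "parent": "Better Keyboard"},
    {"name": "Wood grain theme", "parent": "Better Keyboard"},
    {"name": "AnySoftKeyboard",  "parent": None},
]

# Collapse every theme under its parent application...
themes = defaultdict(list)
for app in apps:
    if app["parent"]:
        themes[app["parent"]].append(app["name"])

# ...so a search only ever lists top-level apps, themes tucked underneath.
for app in apps:
    if app["parent"] is None:
        print("%s  (themes: %d)" % (app["name"], len(themes[app["name"]])))
```

With that one field, 600 "keyboard" results collapse to the ten or so actual keyboards.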

Dunno, I'm obviously frustrated at the market for making me wade through more "Apps" purporting to be keyboards than I have tower defence games, and I'm doubly frustrated 'cause all I really wanted was a tab key.  Are those really so much to ask for?

warthog9: Warthog9 (Default)
2010-01-30 03:55 pm
Entry tags:

Puppet - initial thoughts

So kernel.org has gotten up to the point where it's some 10 machines in size and scope, with general plans to grow that number of machines as needed. Now at 10 machines, copying around configuration files, directories, etc. is a little unwieldy, but it isn't so cumbersome as to really make me want to change things. Generally speaking our configurations are very static; we don't add new machines very often and our configurations don't change much over time.

That said we recently started work on deploying an 11th machine for a specific project, we are still feeling out the project so I can't say much about it yet, but I figured it was time to start trying to centralize the configuration for kernel.org into something a bit more centrally managed and controlled.

Why? Well designating certain machines as 'masters' of a configuration is fine, but it still means machines could diverge in ways I'm not expecting. It's also hard to explain to everyone else who has to work on these machines, and ultimately it's not really a good thing for long term sustainability. Plus I wanted to play with puppet on a larger scale.

So as of today I've got now 2 machines hooked up to puppet, our backend machine that deals with our dynamic web content (demeter) and our mystery project box (nott*). I've got the following things ported over:
  • smtp / mail
    • sendmail
    • greylisting
    • clamav
  • ntp
  • puppet (itself)
  • rsync
  • sudo
  • yum-updatesd
While I recognize this is not a particularly impressive list, this is what I've gotten through in only a couple of days worth of work, and our mail setup is *NOT* simple (there's over 1M of configuration data to deal with for that alone). I'm at the stage where I'm just taking our existing configuration files and copying them around; in a lot of cases this is likely how it will stay, but I can see where there would be some much more interesting setups using their templates and such. The example that immediately floats to mind is our configuration files for vsftpd, which don't allow us to store all the possible ports to listen on in the configuration file. It sucks; most everything else doesn't care, but vsftpd does.

That said I have been pondering something that I would genuinely love: the ability to include reported information from the clients in templates.

Let's say I have a list of machines that are connected to puppet:
  • machineA
  • machineB
  • machineC
  • machineD
  • machineE
Let's also say that I've applied the class 'greylist' to all of these machines. Now, in a puppet template I can include information about that specific machine in the configuration file, say the IP address of the machine. This is very useful indeed; I can trivially tell them which interface to listen on and they are good to go. However, what if I want to have all of the machines (nodes) that belong to class 'greylist' (or that have imported it) listed in the configuration file?

So far I can't find a way to do this, but it would be extremely useful and it might be something that I implement outside of puppet and do some more basic queries on my own, it's just annoying that this doesn't already exist inside puppet. That said I'll happily admit to being a n00b when it comes to puppet, and I fully expect to re-write the rules that I've already written in the near future (in fact I think one of the rules has already gone through 3 revisions). I'll likely post some more commentary on this as I get further, come up with something novel, etc.
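For what it's worth, the closest thing puppet has to this is its exported-resources mechanism (which needs storeconfigs enabled on the puppetmaster). A sketch, where `greylist::peer` is a hypothetical define that drops one line per node into the config file:

```puppet
class greylist {
  # Every node in the class exports a record describing itself...
  @@greylist::peer { $hostname:
    ip => $ipaddress,
  }

  # ...and collects everyone else's exported records, so the generated
  # config ends up listing all nodes that include class greylist.
  Greylist::Peer <<| |>>
}
```

It's indirect compared to just asking "which nodes have class X" in a template, but it gets the list of greylist peers into every peer's config file.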

* For those interested this is the Goddess of Night from Norse Mythology http://www.godchecker.com/pantheon/norse-mythology.php?deity=NOTT
warthog9: Warthog9 (Default)
2010-01-30 03:17 pm
Entry tags:

General Goals

So my general goal with the updated / new blog is to try and get a post going once a week or at least every other week.  Not entirely sure if it will all be personal, work related or what.  But going to try and get a blog post at least as often as I do my status reports.  Knowing how well my old blog went - this will likely fail ;-)

That said it's worth a try!
warthog9: Warthog9 (Default)
2010-01-17 12:33 am

Test post - to be deleted

Monkey Monkey Monkey Monkey Monkey Monkey Monkey Monkey Monkey Monkey Monkey Monkey Monkey Monkey Monkey Monkey Monkey Monkey Monkey Monkey Apple.
warthog9: Warthog9 (Default)
2009-07-24 12:00 am

(no subject)

So again, I'm not behind - go look at my twitter feeds, they are moving. But that said, let me jump in on why I'm going to rant now, and rant I will! But first the back story: last night I got an invite to go to the Google dinner for OSCon. Really, LH is the one I should blame; the 'you will be there!' with glares of death if I didn't say yes (I was going to anyway) was a key part of this. It was a lovely evening, I caught up with a bunch of people, had some good food, and at the end of the evening walked home with an Android G1 Dev phone. Score one for my side!

Now, if you have been hanging around with me at all for the last year or so, you would have heard me rant at some point about how I'm dead convinced that all the new cell phones on the market these days suck rocks. Now I'm being specific to the smartphones here; "normal" phones just work, thankfully, and I can't fault them in the slightest. Smartphones, however, suck, plain and simple. Let's break this down some, shall we?

What options exist for smartphones:

  • Apple - iPhone

  • Access - PalmOS

  • Palm - WebOS

  • Google - Android

  • Symbian - Symbian

  • RIM - Blackberry

  • Microsoft - Windows Mobile

So realistically that's a solid list; there should be competition, ever-increasing features, selection, and really all kinds of awesome there. However, there's basically several piles of steaming poo, a couple of antiquated offerings, and one or two things that only a corporate IT department could love. Really I'm mortified and angry - unsurprising, since I'm moved to 'blog'. So while I come up on 24hrs with an Android phone, let me get down in writing my experiences with the above and ponder what can be done.

Access - PalmOS I'll start with the old PalmOS, now owned by Access. I've been a Palm user for ages; I've had Palms, Handsprings, and my latest is a Treo 755p from Sprint (yes, I'm on CDMA in the States. I understand this adds to my angry bitterness, because CDMA is the unloved protocol, rarely gets the 'cool' phone, and generally gets screwed. Why do I stay? I've got a good phone plan with Sprint and I'm reluctant to change it.) I've had the phone for two years now, and it's a frelling tank. I've dropped it down flights of concrete steps, waterlogged it, dropped it, thrown it, and by and large it's got very little damage for all of the abuse and daily battering it takes. The software stack is time tested and, while quite antiquated at this point, does generally just work. The only really big gripe I could have with the PalmOS is the fact that the phone applications, interactions, etc. are clearly shoehorned onto the side of the original PalmOS. It works, it's just clunky, and by today's standards is anything but sexy.

Ultimately, if the issue was just that the PalmOS wasn't sexy, I would happily live on with the PalmOS and accept I am not hip or cool. However, developers have fled from PalmOS - or really, they fled 5 years ago, when Palm (at the time) hadn't done anything resembling a major update to the OS in years and was working on its "new improved Linux based OS!". There are still a few places putting out new or updated PalmOS apps, but at this point it really feels like I'm standing in a ghost town when I pull out my Treo. But I'll also say that as of right now, it's the best smartphone platform on the market today.

Apple - iPhone As much as I want to hate the iPhone, its lack of a 'real' keyboard and all, it is probably the second best smartphone & smartphone OS out there right now. Why? Well, it comes down to this: that's where all the development is. Apple, right now, is winning if by no other virtue than they have convinced the ravenous hordes of developers that their phone is the one to program for. My bank has put out a bunch of mobile apps for depositing checks and doing other things. What platform do they support? The iPhone & iPod touch. Do they support anything else? Nope. Why? Not enough market share. Same goes for Skype: I would love to be able to use my phone as a wifi connected device I can throw in my back pocket and make Skype-to-Skype calls with, however the only platform you can do that on is *drumroll please* the iPhone. Everyone else gets thrown into the "Skype Lite" category, which doesn't use your wifi connection, should you have one, for the voice portion of the call.

So what does this mean? Apple is winning, and if they continue to have dominance in the developer community and mindshare they will force every other platform out there to suck, and eventually falter and possibly die. If that's not depressing as hell I don't know what is. I like the idea of an 'open' platform a la the old PalmOS: you could do basically anything on it, install anything you want, etc. With Apple and the iPhone you can do sorta everything you want, assuming you either jailbreak your phone (a constant cat and mouse game with Apple) or you bend to Apple's whims.

Palm - WebOS So after Palm sold off their antiquated OS to Access, they finally got their act together and got something that resembles a good phone stack out the door again. From what I've seen, WebOS is a very, very solid stack. They might even have been justified, for a time, in calling it a serious contender against the iPhone and launching it as an iPhone killer. Well, that was until a week after the Palm Pre (the only, to date, WebOS enabled phone) launched, the latest iPhone came out and drowned out the Pre's existence. This particularly sucks because it's a CDMA phone on Sprint's network. Bonus points: the hardware looks like it takes the same built-as-a-tank stance that my Treo has. Really I should be practically giddy over this phone, and yet I'm not. I feel like it's an utter disappointment, a phone that *SHOULD* be perfect, have everything I could dream of, and I already know it's not what I want. Palm has taken a novel approach in WebOS and created what they call Synergy, or in lay terms: take all this data you have in various places like Google and Facebook and mash it all together so that it looks like a single contacts database, calendar, etc. It's neat, but it (currently) has one downside I can't condone: it only supports Facebook and Google as its "cloud" storage medium. As someone who, shockingly, doesn't primarily use these, I'm left completely out in the cold. There is no way to sync it to a CalDAV or SyncML server that I run; there is nothing right now but what Palm approves of to use Synergy. Heck, there's not even desktop syncing.
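For the curious, the basic idea behind that "mash it all together" trick can be sketched in a few lines. This is purely my own toy illustration of the concept (none of the names here are Palm's actual code or APIs): contact records from several sources get linked into one unified view, keyed here by e-mail address.

```python
# Toy sketch of Synergy-style contact merging (my own illustration,
# NOT Palm's implementation): records from multiple "cloud" sources
# are folded into a single unified view, keyed by e-mail address.

def merge_contacts(*sources):
    """Merge lists of contact dicts into one unified contact list.

    Each source is a list of dicts with at least an 'email' key.
    Fields from later sources fill in gaps, but a field set by an
    earlier source is never overwritten.
    """
    unified = {}
    for source in sources:
        for contact in source:
            entry = unified.setdefault(contact["email"], {})
            for field, value in contact.items():
                entry.setdefault(field, value)  # first source wins
    return list(unified.values())

# Hypothetical sample data standing in for two different accounts.
google = [{"email": "alice@example.com", "name": "Alice",
           "phone": "555-0100"}]
facebook = [{"email": "alice@example.com", "name": "Alice B.",
             "birthday": "Jan 1"},
            {"email": "bob@example.com", "name": "Bob"}]

merged = merge_contacts(google, facebook)
# Alice shows up once, with her phone from the first account and her
# birthday filled in from the second; Bob appears once as well.
```

The hard part, of course, isn't the merge itself but deciding which records are the same person when there's no shared key - and, in my case, letting me point it at my own server instead of someone else's cloud.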

So if Palm would be kind enough to let those of us who *DON'T* trust our data to these "clouds" use our own storage and access systems to handle our contacts, events, todos and notes, I would probably jump on a Pre without question. (Also, if Sprint would stop being idiots and not force you to have a specific data plan to use a phone - LAME and DUMB!) WebOS and the Pre are still new; maybe I'll be proven wrong and what I want will come to it soon. I'm not going to hold my breath though, I'm getting convinced the entire cell phone market just hates me and all its users.

Google - Android Here's another platform that has a huge amount of potential, possibly even more than WebOS if you ask me: it promises a completely open phone stack that should be extensible, modifiable, and really just plain awesome to use. It has a lot of the hallmarks of the original PalmOS. I can even (and have) installed apps that aren't available in the store on the phone, even going so far as to install an entire additional appstore in parallel with the Android Market. This has serious potential, and since it won't be limited to a specific vendor (a la Palm and the Pre/WebOS) there's room for it to grow in ways that Google (the main backer) wasn't expecting or intending. This is *GOOD*!!!

However, Android (as anything that isn't the iPhone these days) is really a second class citizen from an apps perspective. Skype doesn't have a client that's worth having, Facebook is having a serious spat with Google (so no nice and good Facebook app, like the one on the iPhone), and the development model for Android is a little cumbersome in comparison to the iPhone's. Couple that with the whole thing feeling SLUGGISH in comparison to both my Treo 755p (PalmOS) and the iPhone, and you have a platform that's got a black eye. It still suffers from the same problem as WebOS & Synergy, in that it only syncs with Google, but I have more hope of fixing that than I do on WebOS in the short term. Really, if Android continues to feel slow and lack some of the pizzazz that the iPhone has, it will always be a second class citizen.

Symbian - Symbian Sadly I don't have a lot to say here; I haven't ever used Symbian much (though I hear good things about it), and it seems to be well respected. It does have the same issues with Skype that Android does, and all of that, but at least it doesn't seem to be embroiled in the same scuffles as everyone else. Sadly I can't say more than that.

RIM - Blackberry It's a company that wants to be hip and cool, and as those Mac vs. PC ads that Apple ran seem to indicate, it's a stodgy business suit trying to be hip and cool. I know a few people who use them, and love them, but really I think they are at the point of trying to do too much to stay afloat, and aren't sure how to break out of the corporate & push e-mail business.

Microsoft - Windows Mobile At this point, it's as antiquated as the PalmOS. It has been around long enough that software is plentiful and 'first class' so to speak, Skype being able to do Skype-to-Skype and such. But Microsoft needs to spend some time updating it and trying to make it better for today's market. Yeah it works fine, and businesses love it at this point - but if Apple pulls all the developers away that won't last forever.

*sigh* ok, I've ranted myself out it seems. I'm really disappointed: the more I look and the more I explore, the more I get my hopes up, and the more they get dashed. Well, now that I've got a G1 I can at least try and help make it better in the hopes of succeeding, but who knows.

warthog9: Warthog9 (Default)
2009-06-11 12:00 am

(no subject)

So I'm not nearly as behind as my blog would indicate. I've taken up a bit of twitter (both warthog9 and warty9 over there; the former is private, and please don't be offended if I don't let you in to that circle), and I've come to the conclusion that it's really just the 21st century version of a private IRC channel where I can vaguely control who sees what I say. You younguns and your fancy new technogadgetry *waves cane* you ain't done nothing new here, so get off my lawn!

Also, lightly succumbed to Facebook. I at least partially blame Terri for it, and I blame the rest of you hipsters out there practically forcing one to be on it. That said, it isn't all horrible; if you know that nothing is private, and that the more apps you use the more distributed your personal data becomes, it is an interesting experiment in social networking. Heck, I'll admit to having a serious idea for an app on Facebook right now, spurred on by an idea of Terri's. We'll see if I go insane and actually do it! For as horrible as Facebook is (discussed here, here, here, here, here to name just a few), it is not *ALL* bad; I was able to connect up with a couple of people I haven't heard from in years, see how they were doing, and I now have a semi-decent way of keeping up with them. That said, I still fear Facebook, Google, and heck, anyone who wants to have access to all of my data. If they have my data, they control it - not me - and losing control of my personal data is not something I want to do, which means I'm really not a huge proponent of the 'cloud' as it currently exists.

But the upshot of that, and before I start ranting about how both the general populace, and especially younguns, are idiots and give away their personal information without any regard or consideration for it, is that I've been making shorter, more timely updates there, where people might actually read it (I fully know, understand and accept that no one reads this - and I'm really quite ok with that). I'll consider integrating my public twitter feed here, but that would also mean I would have to change off of my ohhh so high tech blogging software of vim & editing raw files to put my comments up, and actually use one of these newfangled things like WordPress. I'm not entirely convinced yet, so that pipe dream may not yet come to fruition for all of you who want an RSS feed.

Been working my butt off on a pile of things, from papers for Linux Symposium and LinuxCon, to getting various work things out the door (archive.kernel.org, working on GSoC w/ BKO, account request systems, and a pile of other things), to traveling a fair amount between Ottawa in February and now Iowa (longest stay back in Iowa since I moved). Even took my first vacation since I last changed jobs, as Mom had declared "We are going to Disney World!" - and we went! I've been working on getting photos up and dealt with, but I personally took 40 some gigs of photos, so you can imagine it's going to take me a while to sift through the over 3000 photos before I get those up. Had Tom & Amanda's wedding, where I was the usher, and that was a blast. At the rate I'm going I'm really looking forward to getting past September, because then I can stop being gone most of the time. I mean, it's June, and of the 163 days that have passed this year I've spent 78 of them (to date) away from my semi stupidly expensive apartment. I like travel, but this is much busier than I would normally have expected. So far it looks like it's all worth it, but it's weird being this mobile.

Anyway, it's getting late and I should crash. Just wanted to update, let people know I was alive, and that I've had idea after idea of rantable material to blog about - I just never seem to get around to it or have the time to do it.