Sunday, July 24, 2011

The Week In Memes...

I think switch chips in wireless routers are going to be the bane of my existence.  I spent this weekend trying to document the Realtek RTL8366SR switching chip that is in my new Netgear WNDR3700 wireless router.  Basically, I am documenting the chip's internal registers from the smattering of Open Source code and datasheets related to the RTL8366SR.  Why am I doing this?  Well, I am not convinced that OpenWrt's driver for the chip is entirely correct.  I found one or two issues with it already, and in looking at their documentation it is clear to me that they were also working from an extreme scarcity of documentation.  In fact, the OpenWrt folks didn't even realize that the chip is an RTL8366SR (they thought it was an 8366S, which has some significant differences).  What made me realize we were dealing with that specific piece of hardware was that I could see from the traces on the PC board that the WAN port is connected to the switch chip, and that the second Ethernet port on the WNDR3700's SOC appeared to be connected to the switch chip as well.

It turns out that one of the features of the RTL8366SR is that it provides a 5th PHY (10/100/1000BaseT Ethernet port) that can either be used as a 5th switch port or connected to the CPU via a second RGMII port.  In this case, Netgear routes it to the second RGMII port (which comes up as eth1 under Linux), providing some measure of additional security and performance for the WAN port.  The first RGMII port on the switch chip is what the OpenWrt coders called the "CPU port" and is actually treated as the 6th port (p5 in zero-based nomenclature) on the switch.  The second RGMII port is ONLY available as an interface to port 5 (p4) if that port is configured out of the switch fabric - which, according to a preliminary spec sheet I was able to dig up on the Internet, is exactly how the router uses it.
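To keep the port numbering straight in my own notes, here is a little sketch of how I believe the ports line up.  Fair warning: the labels and the helper function are entirely my own invention for illustration - none of this comes from an official datasheet:

```python
# Hypothetical sketch of the WNDR3700's RTL8366SR port topology as I
# understand it.  Port labels are my own, not Realtek's.
PORT_MAP = {
    "p0": "LAN1", "p1": "LAN2", "p2": "LAN3", "p3": "LAN4",
    "p4": "WAN - 5th PHY, routed to the SOC's second RGMII port (eth1)",
    "p5": "CPU port - first RGMII port (eth0)",
}

def ports_in_switch_fabric(wan_removed=True):
    """Return the ports participating in the switch fabric.  When p4 is
    configured out of the fabric (as on the WNDR3700), it acts as a
    dedicated interface to the second RGMII port instead."""
    ports = ["p0", "p1", "p2", "p3", "p5"]
    if not wan_removed:
        ports.insert(4, "p4")
    return ports
```

If my reading of the preliminary spec sheet is right, this is why the driver's assumption that all five PHYs always sit in the fabric falls apart on this board.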

In any case, the driver developers didn't realize this and made a fair number of assumptions about the chip that are somewhat wrong.  My goal is to produce a document that accurately represents the chip's registers, like the one I would have gotten from the official Realtek datasheet if I could get my hands on one, and then fix the driver accordingly.  Despite some incorrect information in the driver source code, it was an excellent base for understanding enough about the chip to be able to extrapolate some of what I needed based on what I know.  I wonder how in the world they were able to get so much information without the actual datasheet.  Unfortunately, I am discovering that this router suffers from the same lack of information as my previous router did.  Sigh.  Realtek and Broadcom:  WHY OH WHY do you require a non-disclosure agreement and "development partner" status to get a datasheet?!  This makes no sense.  There is no IP in a datasheet - just the information that allows one to use the friggin' chip!

It has been hotter 'n hell down here in Texas for the past couple of months.  I don't mean, "You live in Texas, you should expect it to be hot," kind of hot.  I mean day after day of 100 (or near-100) degree temperatures since May.  I think we have had one day since then with any significant amount of rain (we had a sprinkle or two a few mornings back).  The drought is serious enough at this point that I anticipate some draconian water restrictions coming.

When it gets this hot outside, I really can't say I feel much like going anywhere outside of the air conditioned house.  Today I thought about going for a ride in the country in the car, but just driving around in this heat even with an air conditioned car seems kind of pointless.

I have been following the YouTube musings of a gal who calls herself "Bionic Dance" (her real name is Kate).  She has been trying to explain to people why babies are atheist.  This has prompted a series of responses from people who think etymology is the study of hearing one's self babble endlessly without thinking.  Kate has been trying to get people to understand that, effectively, people are born without a belief in a god, and therefore our default position in life is being atheist.  We must learn to have a belief in a god, becoming a "theist" as that learning takes place.  The reason why is simple:  the word "atheist" means "without belief in a god" - in other words, not a theist.  This is important for two reasons:
  1. Getting people to understand that atheism is simply without belief in a god (and not satan worship or baby killers or some other such nonsense) is the first step toward all of us being able to coexist.
  2. It doesn't matter what you are talking about:  if a person or thing or animal does not believe in a god for any reason whatsoever, then by definition they are atheist.  It doesn't have a negative, positive, or any other connotation.  It just is.  It's like saying, "I have blue eyes."  You know the definition of blue, and what eyes are, and when you look at the color of my eyes, you see blue.
The big controversy comes from religious people (and I think even a few dense atheists) who disagree with Kate that babies are born atheist.  Heaven help that when a baby comes out of a bible-thumping theist, the little tyke is atheist.  But, really, there is nothing wrong with that.  As soon as the little spawn is old enough for said theist to indoctrinate him or her with their nonsense, the poor kid, not knowing that their parent is running on automatic, will probably believe it, and no longer be atheist.  However, that isn't good enough.  That's because religious types want to believe that, absent any intervention, their offspring will somehow magically believe as they do, for to not do so would make them defective in some way.  Actually, the truth is that without the proper teaching, their offspring is more likely to believe in the monster under the bed or the evil monkey in the closet than in their parents' god.

The battle over religion vs. atheism seems to be coming to the forefront more and more lately, and more and more I see lots of misinformation being passed around (aside from the superstition making up religion itself).  Watching a show on PBS, I saw a country singer who recently "came out of the closet" declare that the USA is a "Christian nation."  Yet another person who paid no attention in history class and has never looked at the Constitution, or who is simply spouting something they heard from someone else and never verified.  It makes me wonder how long I can handle the ignorance and stupidity that is around me.  Yeah, folks, I'm not perfect and I don't know everything, but when I hear people fight with "Bionic Dance" over the meaning of "atheist" without a rational thought whatsoever, I have to wonder how in hell I ended up in this world.

For what it's worth, Kate, I'm with you!

(courtesy of memebase "Rage Builder")

In spite of all this, I've been listening to "K-Earth Classics" (oldies pop from the 1960s) from Los Angeles on my Roku, which is a great station.  I actually lost track of the time and need to get to sleep...since the alarm and my cat will be waking me up in the morning, and this time it really will be time for work!

Thursday, July 21, 2011

Is Open Source Headed In The Wrong Direction?

In the time since I began writing here, I have made many comments about Open Source projects and have, overall, been a fairly vocal advocate of Open Source Software.  I still feel that open standards and a rich library of open source software are the only way that computer technology can continue to be innovative, particularly in light of the patent frenzy that looms over the computer science and technology community as a whole.  In my line of work I get to work with both commercial (closed-source) and community (open-source) software and, in general, I find that while the commercial software tends to be more polished, the same kinds of bugs and software quality issues exist in both.  The difference, generally, is that to get the commercial software fixed you first have to convince the company there is a bug, then convince them that the bug is worth their time to fix in a reasonable period, and if you get that far you may actually see the fruits of your labor in a software update for little or no cost (though generally, you end up having to buy a new version or a support contract).  Traditionally, Open Source software had a quick turn-around on bug fixes because you could, indeed, fix the bug yourself, or someone among a large user community could usually do so.

There are some dirty little secrets, though, festering in the Open Source community that I feel compelled to reveal in the hope that we can reverse the trend.  Note that I am not an OO (Object-Oriented) type of person, so what you will see here references procedural software design.  However, I feel that these same issues apply (maybe even more so) to OO classes as they do to subroutine libraries.  That said...

Issue 1:  Poorly Documented Code

If you have a small utility program that does a specific task and is not generally embedded in some other program, then documenting your code is not critical (nice, but not critical).  When I refer to code documentation, I refer to subroutine/class libraries or major OS subsystems that are meant to interface with (or act as an interface for) other software.  What I am finding more and more is that automated utilities are being used to interpret meta-tags in comments and produce documentation, but this really isn't documentation.  These systems produce loads and loads of HTML documents and subroutine descriptions, but don't really show how the subroutine or subsystem is meant to be interfaced with.

The example I like to cite often is the Linux D-Bus system, an IPC (inter-process communication) system that is meant to allow various software subsystems to talk to each other.  It is meant to replace many IPC methods that are both incompatible and non-interoperable.  Search Google for "dbus documentation" and you may end up at several places.  The de facto place to start is the site where the D-Bus specifications are located.  While the introduction to D-Bus in the specifications says that it is "easy to use," I find the entire description of it so incredibly complex and obtuse that I have yet to understand how all the pieces fit together, what is going on inside, and most importantly, how and why I would use this rather than trying to roll my own (in actuality, I know why I wouldn't want to "roll my own" -- it's because I wouldn't want to write yet another non-interoperable IPC system!).  By the time you've read the third or fourth dot-separated interface definition example, your head begins to spin.  By the time I'm done reading, I don't really have a clue how to write a program that uses a D-Bus interface, nor do I have a really good idea what my responsibilities are in utilizing that system.  Okay, yeah, you've given me an API in several different languages, and maybe even written an example program, but I still don't know, both as a software developer and as a system administrator, what I have to do for the care and feeding of the D-Bus system.
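For what it's worth, the mental model I eventually pieced together is that every D-Bus method call is addressed by four coordinates: a bus (system or session), a service (bus name), an object path, and a dot-separated interface plus method name.  Here's a toy sketch of just that addressing - the names below are standard freedesktop ones, but the helper itself is purely my own illustration, not any real D-Bus API:

```python
from collections import namedtuple

# Toy model of D-Bus addressing -- my own illustration, not the real API.
# Every method call is addressed by: bus, service (bus name), object path,
# and a dot-separated interface plus method name.
DBusTarget = namedtuple("DBusTarget", "bus service object_path interface member")

def parse_member(qualified):
    """Split a dot-separated 'org.example.Iface.Method' string into
    (interface, method); the last component is the method name."""
    interface, _, member = qualified.rpartition(".")
    return interface, member

iface, member = parse_member("org.freedesktop.DBus.Introspectable.Introspect")
target = DBusTarget(bus="session",
                    service="org.freedesktop.DBus",
                    object_path="/org/freedesktop/DBus",
                    interface=iface,
                    member=member)
```

That four-part address is, as far as I can tell, the one plain sentence the documentation never leads with.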

I call this poor documentation not because the people involved didn't attempt to document the code, because they clearly did.  In fact, they wrote a lot.  What's bad is that it is not helpful to me.  It goes through lots and lots of history about the inner workings, but it is not organized so that a system administrator can read one section and understand their responsibilities with respect to D-Bus, an application developer can read another section and understand the interfaces they need to know, and someone who wants to hack on the internals of D-Bus can find a deep investigation of how it really works (complete with some diagrams, because a picture really is worth a thousand words).  Instead, all of these are probably documented somehow, but are so jumbled together that it is impossible for these three groups of users to truly understand the system as it applies to their role.

I am not picking on D-Bus alone.  This is the same for many projects - particularly in the Linux world, because of the rapid development taking place.  D-Bus is also not the worst offender either.  There are other libraries or OS interfaces which have little to no documentation at all, so your best bet is to grab source code and try to wrap your brain around what the authors were thinking when they wrote it.  That makes for a very elitist group, and seriously limits who can participate in development.

I remember when I was using the VAX/VMS and TOPS-10 operating systems and they had excellent documentation on the OS libraries and system services (TOPS-10 had the Monitor Calls Manual).  Here, you knew what the library call did, when you would want to use it, how to use it, and what data structures were required to be defined.  I think I have just dated myself...

PS:  OO programs and classes are not self-documenting.

Issue 2:  Unnecessary Complexity

I always laugh when I talk about SNMP - the Simple Network Management Protocol - because it is so far from simple as to nearly be an oxymoron.  I've been working with network equipment for years, and even network giants like Cisco can't implement SNMP correctly in their products.  In fact, I have never seen SNMP implemented entirely correctly anywhere.  The reason is that while the protocol itself may be "simple" (ASN.1 may be simple in theory, but it is not simple to implement), the interfaces are so complex that nobody really implements them properly.  Anyone who has downloaded a manufacturer's MIBs and tried to run them through the ever-friendly (said with rich sarcasm) Net-SNMP (originally developed at Carnegie Mellon University) interpreter will notice that they end up with hundreds of error messages.

After looking at SNMP for a while you start to ask yourself, "Can't this be done any simpler?"

In defense of SNMP, I actually wonder if it can.  I mean, I certainly haven't come up with anything better, but I also haven't tried much either.  In any case, organizationally speaking, it is actually hard to figure out how to manage any network equipment with SNMP even after looking at the MIBs the device implements.  You'll typically end up having to join one OID (object identifier) in one "table" against an OID in another table to get the information you need.  Cisco makes things even worse by requiring that you get information for different VLANs by using the community string in an undocumented way (for those interested, it's {community}@{VLAN}).  Sure, this is all documented in the MIBs, but trying to navigate them is a chore.  Cisco at least has a web-based tool to navigate their OIDs, but given how slow Cisco's web site is, it isn't pleasant to use either.
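To make the "join" chore concrete, here's a sketch using made-up walk results: one table maps interface indexes to names, another maps indexes to counters, and you correlate them on the shared index.  The Cisco per-VLAN community trick is just string formatting.  (Real code would use an actual SNMP library; everything below is illustrative.)

```python
# Sketch of the typical SNMP "table join" chore, using made-up walk
# results.  Real code would query a device with an SNMP library.

# Pretend results of walking ifDescr (index -> name) and
# ifInOctets (index -> input byte counter) on some device.
if_descr = {1: "Fa0/1", 2: "Fa0/2", 3: "Gi0/1"}
if_in_octets = {1: 1234567, 2: 42, 3: 987654321}

# The "join": correlate the two tables on their shared index.
counters_by_name = {if_descr[i]: octets
                    for i, octets in if_in_octets.items() if i in if_descr}

def cisco_vlan_community(community, vlan):
    """Cisco's per-VLAN community string form: {community}@{VLAN}."""
    return f"{community}@{vlan}"
```

Two flat dictionaries and a join are manageable; with dozens of interrelated MIB tables, the same exercise gets old fast.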

Some software simply grows organically in such a way that its interfaces or functionality become so complex that it would be better to re-think how it is done than to pile more and more functionality onto the already complex framework.  I point to two other examples of software that has grown in this way:  the newer syslog daemons for Linux, and the latest Linux OS startup manager called "systemd."  These are simple functions that should really have (or retain) a simple interface, but they are becoming more and more complex as time goes on.

One other casualty of excessive software complexity is that it becomes so difficult to use and/or configure properly that latent security holes form that can eventually be exploited.  While I have come to like sendmail, it is an early example of a software system that suffered from this kind of issue.

UNIX was originally created such that simple software tools were developed as building blocks, and these simple tools were meant to be coupled together to form more complex systems.  While the problems we're now solving are more complex than these original tools were designed to handle, it seems that we've lost the basic principles that made UNIX a desirable operating system to use.

Issue 3:  Bloat / Scalability

I almost feel that the issues of bloat and scalability should be handled independently, but upon further thought I think they are so closely related that I am going to keep them together.

When I talk about bloat I am talking about a system that has become so large that it should be questioned whether the system should be broken into smaller, more manageable pieces.  Most often, bloat occurs because the software (like what was described under excessive complexity) grows organically and eventually starts to do things that it wasn't originally meant to do.  At other times, it simply grows too big because it is trying to do everything all at once.  I have a few examples of this that I am going to pass on, because my writing is also becoming kind of bloated as a result.

A cousin of bloat is scalability.  Scalability problems generally arise when someone writes code to solve a small problem, someone else sees it, thinks it's a great idea, and uses it to solve a much bigger problem.  In Open Source software the biggest scalability problems I see can be categorically called memory abuses.  In so many software systems I see code that will casually read an entire configuration file, parse it, and keep it in memory.  This is acceptable when the configuration file doesn't grow to be too big, as most configuration files don't.  However, some configuration files hold data that is probably better suited to being kept in a database of some kind, rather than being read into memory from a configuration file.  The Asterisk Open Source PBX contains many classic examples of such an abuse.  In addition to the application tying up loads of memory holding copies of these configuration files, it also prevents other applications from being able to modify the data outside the application in such a way that multiple applications can work together.

Another example of a hidden and subtle memory abuse and scalability problem is OpenWrt's UCI (Unified Configuration Interface).  UCI works by storing a common configuration file format in multiple files in a single directory tree (/etc/config, in their case).  Applications wishing to use UCI use a library that effectively takes all the files in the directory, parses them, and converts them to a tree of dot-separated key/value pairs.  The configuration language itself presents a scalability issue because its syntax is limited, yet it is expected to be used for virtually all OS configuration tasks.  The bigger scalability issue is that the shell-script API for UCI reads the entire configuration tree into shell variables.  So if a shell script uses only a small portion of the UCI-based configuration, it must still read, parse, and store all of the configuration files in memory.  In fact, every time the UCI library is used, the configuration files are effectively re-parsed, because the application never knows which configuration file it may need, and the UCI system doesn't know whether one of the configuration files was changed by another process.  As I came to understand more and more of what UCI was doing and how it worked, I asked myself, "Why in the world didn't they just use SQLite?"  Now, granted, there are some advantages to UCI, an important one being the ability to maintain temporary state by grouping temporary directories with the normal configuration directories.  Yes, I get that.  However, SQLite gives you the flexibility and scalability of a SQL-based database coupled with an efficient size that works well in embedded devices.  It was designed with this in mind.  State could be maintained in a temporary table, as an example.  OpenWrt has some fine conceptual design features but lacks sufficient scalability in many areas.
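To show what I mean, here's a minimal sketch of the SQLite approach - the schema and key names are my own invention, not OpenWrt's.  Dotted keys go in an ordinary table, transient state lives in a TEMP table (which vanishes when the connection closes), and a lookup touches only the rows you ask for instead of parsing everything:

```python
import sqlite3

# Minimal sketch of a UCI-like configuration store in SQLite.
# The schema and key names are my own invention, not OpenWrt's.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE config (key TEXT PRIMARY KEY, value TEXT)")
# A TEMP table lives only as long as the connection -- a natural home for
# the transient state UCI keeps in its extra directories.
db.execute("CREATE TEMP TABLE state (key TEXT PRIMARY KEY, value TEXT)")

db.executemany("INSERT INTO config VALUES (?, ?)", [
    ("network.lan.ipaddr", "192.168.1.1"),
    ("network.lan.netmask", "255.255.255.0"),
    ("network.wan.proto", "dhcp"),
])
db.commit()

def get(key):
    """Fetch a single value; SQLite reads only what the lookup needs
    instead of parsing every configuration file into memory."""
    row = db.execute("SELECT value FROM config WHERE key = ?", (key,)).fetchone()
    return row[0] if row else None
```

You'd also get concurrent access for free: SQLite handles the "another process changed the configuration" problem with locking and transactions, which UCI has to punt on.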

When Open Source software systems become more bloated and less scalable, it forces people to ask themselves, "Why don't I just cave and use Windows?  Or MacOS?"  Which leads me to the final issue...

Issue 4:  Bugs & Egos

Bugs are a fact of any software system, particularly those that become larger and more complex.  It is in how the bugs are addressed that Open Source software projects are becoming more and more troublesome.  While it is true that having the source code means you can fix the bug yourself, which you can't do in a commercial (closed-source) model, actually fixing the bug requires a particular level of expertise.  If you're dealing with issues 1 through 3 that I outlined above, that necessary expertise becomes less and less available, even to experienced software developers.  In addition, if you find the bug, and are skilled enough to fix it, you'll ultimately want your hard work to be incorporated "upstream" into the next release of that software.  If you can't fix it yourself, then you need to report the bug upstream as well.

Now, having worked on several software projects in my day, I realize there are frivolous and even incorrect bug reports and patch suggestions.  However, the developers on many Open Source projects have begun to have such an inflated sense of their ability to produce good, bug-free code that they frequently place such stringent demands on people reporting bugs as to discourage participation.  Without naming specific projects, I have submitted bugs with a high enough level of detail that the core development group should have been able to reproduce the bug without lots of additional detail.  However, they require so much additional detail that it frequently takes me longer to report the bug than it did to fix it (when I can).  Seriously, many projects simply refuse to acknowledge your bug report unless you provide massive amounts of debugging output and precise details on how to reproduce the bug.  Some bugs cannot be easily reproduced without being exercised in a specific environment that can't be trimmed down to a small sample case.  I have also had instances where projects rejected my bug report simply by refusing to acknowledge the behavior as a bug, or by refusing to address the issue, generally asserting that the problem won't occur in a typical environment.  This is more a problem of ego than of failing software, and it is becoming more common as more and more people utilize Open Source software.  I genuinely value people's time and understand that these projects are run in a volunteer capacity, but if a project is to be taken seriously it can't be so inaccessible that only a few elite can be trusted to address problems in that software.

The other ego issue is code that is so badly written that, while it works, it is hardly maintainable or expandable.  Most software projects won't allow the code to be rewritten, or if they do, it won't be without a large amount of supporting evidence.  In many cases, what constitutes badly-written code is in the eye of the programmer, but I've seen some utter crap in my travels that makes you wonder how any computerized device in existence today actually works at all.  Again, without mentioning a specific project, there is a well-known Open Source developer who wrote some C code with two arrays.  That developer depended on the C compiler allocating space for the two arrays such that they were adjacent to each other, and proceeded to access part of the first array by providing a negative array index into the second array (editor's note: I believe it was actually worse than this - it was being used to compensate for the case where the index became negative, and the programmer was too lazy to address that case).  Not only is this blatantly bad coding practice, it was sufficiently non-portable that the code simply failed when used on a different operating system.  Happily, this specific issue has since been fixed...but what possesses a person to write code like this and assert it is correct?

Commercial software companies employ FUD (fear, uncertainty, and doubt) about Open Source software to convince people that using it is risky.  If, as Open Source advocates and project developers, we don't address some of the issues noted here, we are likely to be playing into that FUD.  Larger Open Source projects depend on businesses using the software and supporting development - often by allowing employees to work on it - to remain viable.  Further, without enough continued interest and active participation, there will not be enough development and support to keep the various projects going.  While I am, and will likely continue to be, a strong Open Source advocate, I am beginning to see these issues as an unraveling of some of what I admire about Open Source.  I understand that my criticisms here are likely to gnaw at some people, but at the same time I'm hoping that these criticisms will prompt some thought that will ultimately lead to better software.

Finally, I want to emphasize, again, that the specific projects I mention here have many positive points despite the fact I have not commented on them.  The reason I mentioned them here was because I was interested enough in the project that I wanted to learn more or wanted to participate.  If you're reading this and are part of one of these projects, understand that what I am saying here is meant as a means to make the project better...not to trash it.

Monday, July 11, 2011

How Government Finance Works

I have said many times that there's "no such thing as a free lunch."  What I've neglected to do is explain how government finance works -- that is, how the government gets the money it spends on programs and so on.  Note that this is somewhat of a simplification, and I am not an expert on this subject, but I believe what I am saying here is true.  If anyone has any better first-hand knowledge, please pass it along!

Let's start with a simple example of how you would handle things in your own household.  Let's say you want to buy something - a big screen TV for example.  You have several options:
  1. If you have the cash, and it is not earmarked for some other expense, you simply pay for it.  You're done.  The other options below mean you don't have the cash available, so you need to get the money some other way.
  2. Charge it on your credit card.  Doing this means you get the TV, but you will be paying much more for it since you will need to pay back the credit card company with interest.  Making only the minimum payments on your credit card means that the TV will end up costing more than twice its purchase price.
  3. Borrow money from a family member.  Family generally doesn't charge as much (if any) interest, but not paying the amount back in a reasonable amount of time means that you will not be on good speaking terms any longer with your family (and they could sue you in court).
  4. Sell something of value you already own and don't use.  There's eBay, and even pawn shops.
  5. Perhaps you live with someone and you could both pool resources and share the TV.  This doesn't happen too much anymore.
  6. Don't purchase the TV now, and save for it.
  7. Steal the money.  No, I don't recommend this option, and it really isn't an option, although people do it.
If you were fiscally responsible, you would first think about whether the TV is something you really need at this time, and consider whether it is necessary to finance it or simply wait until you have the money to pay for it.  A TV is typically a discretionary expense, meaning you can live without it (it is something you'd just like to have).
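Option 2, by the way, is easy to check with a quick amortization loop.  Using plausible made-up numbers - a $1,000 TV at 20% APR with a $20 minimum payment - the "more than twice as much" claim holds up:

```python
def total_paid_at_minimum(price, apr, min_payment):
    """Simulate paying only a fixed minimum each month on a card balance;
    returns the total amount paid by the time the balance hits zero."""
    if min_payment <= price * apr / 12:
        raise ValueError("minimum payment never retires the balance")
    balance, total = float(price), 0.0
    monthly_rate = apr / 12
    while balance > 0:
        balance += balance * monthly_rate      # interest accrues first
        payment = min(min_payment, balance)    # final payment may be smaller
        balance -= payment
        total += payment
    return total

# Made-up but typical numbers: $1,000 TV, 20% APR, $20/month minimum.
total = total_paid_at_minimum(1000, 0.20, 20)
```

At those rates it takes roughly nine years of payments to retire the balance, with well over half of every early payment going to interest.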

Likewise with something more important and much more expensive, like a house.  Most people have no alternative but to take out a mortgage (a loan) to buy a home, which tends to be not quite as discretionary (though how large and expensive a house you buy is).  That money comes with a large cost, though.  A $100,000 house with a 10% down payment at 5% interest goes a little like this:  you'll need about $15,300 at closing, you will owe $90,000, and you'll pay around $800/month (principal+interest+taxes+insurance).  After you pay the house off in 30 years, the $100,000 home you purchased will really have cost you $183,930.21.  Consider also that if you don't stay in the home for the full 30 years, the way compound interest works means you pay more interest than principal at the beginning of the loan (in order to keep a consistent payment throughout the loan's lifetime).  I know this because I have a handy spreadsheet around for just this kind of thing.
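Those numbers fall out of the standard fixed-rate amortization formula, which is essentially what my spreadsheet does.  A quick sketch (the $15,300 closing figure includes closing costs beyond the down payment, so I only compute the loan itself):

```python
def monthly_payment(principal, annual_rate, years):
    """Standard fixed-rate amortization: the level payment that retires
    the principal, with interest, over the life of the loan."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

price, down = 100_000, 10_000
loan = price - down                        # borrow $90,000
pmt = monthly_payment(loan, 0.05, 30)      # principal + interest per month
total_house_cost = down + pmt * 360        # what the house really costs
```

The principal-and-interest portion comes to about $483/month; taxes and insurance make up the rest of the ~$800 payment.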

Some of you already know all this, and I'm sorry I had to explain this in detail.  However, there are some people who miss the fine points of personal finance, which is a prerequisite to understanding municipal and federal finance.

In government finance there are income and expenses, just as in personal finance.  Unlike personal finance, though, government does not have the means to earn money (it could print money arbitrarily, but that would cause the money to become worthless).  If a government wants to build a bridge, for example, that costs $10,000,000 to build, it needs to do one of the following:
  1. If there are funds in reserve -- that is, money in the government's savings account -- and enough to build the bridge, then the government procures a contractor and construction begins.  Because people in government don't seem to understand finance too well, they don't have money in reserve these days...
  2. It can raise taxes in proportion to the cost of the bridge in order to raise funding.  This is a problem, though, because inflation means the bridge will likely cost more by the time enough money is raised through taxes to build it.  Also, if the bridge is needed right away, waiting until enough tax revenue is raised may take too long.  And assuming the bridge would be built using one year's worth of taxes, that would very likely be an unreasonably large tax burden for most people.
  3. It can sell a bond to acquire the funds to build the bridge (doing this at the municipal level generally requires an election).  This is the usual way that governments obtain funding for a project.
  4. It can steal the money.  I don't recommend that government steal money either, but in some instances this actually does happen, sadly enough.
A bond is where the "national debt" comes from.  Remember "savings bonds"?  That is an example of a government bond.  The government sells bonds with a promise to pay back the principal and a certain amount of interest when the bond matures, or comes due for payment.  At that time, whoever holds the bond can cash it in and the government pays the holder (principal+interest).  In essence, the government is borrowing money from its own people, foreign interests, and so on, just like you would if you borrowed on a credit card or took out a mortgage.  Only in this case, whoever holds the bond (maybe even you) is the one extending credit.

How does the government get the money to pay the holder of the bond?  Through levying taxes.  Generally bonds take 10 years or more to mature, so the principal and interest can be divided up over those years, and taxes won't need to be raised so high that the cost of the bridge represents an undue burden in any single year.  In short, the government takes out a loan for the bridge, and we, the people, pay back that loan.
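The arithmetic is simple enough to sketch.  Using illustrative numbers - a $10,000,000 bridge bond at 4% simple annual interest over 10 years (real bond pricing is more involved than this) - the annual levy works out like so:

```python
def annual_levy(principal, annual_rate, years):
    """Spread a bond's principal plus (simple) interest evenly over its
    life: the extra tax revenue needed each year to retire it."""
    total_owed = principal * (1 + annual_rate * years)
    return total_owed / years

# Illustrative numbers: $10M bridge bond, 4% simple interest, 10 years.
levy = annual_levy(10_000_000, 0.04, 10)
```

That spreads the bill out to roughly $1.4 million per year instead of $10 million all at once, which is the whole point of issuing the bond.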

What happens if the government can't pay its debt?  Well, then, we're in a bit of a pickle, aren't we?  See, the government doesn't earn money, and it doesn't have stuff it can sell off on eBay or at a pawn shop.  Worse still, the people who mismanaged the funds to begin with (our elected representatives) are generally long gone, or if not, they're not going to own up to the fact that they screwed up.  Or maybe they haven't screwed up - perhaps something simply cost more than they expected.  Much more.  Either way, the government has to pay its debt or it defaults on its loans and goes bankrupt.  Since we, the people, are technically the government, that's not a good situation for us.

So what the government usually ends up doing is taking out a new loan to pay off the old ones.  Since the government is typically a pretty good "customer," they usually don't have any trouble finding a new entity willing to loan them money.  Remember, though, that just like your own finances, if you pay one loan with another loan you not only pay interest on the original amount, but now you're paying interest on the outstanding balance of the loan plus the interest.  Over time and enough screw-ups, that comes out to a boatload of interest.  The other problem is that a government with a reputation of being like Bernie Madoff really can't sell bonds after a while, if prospective bond holders don't think the government can pay them back.  So at some point, funding through tax revenues does have to happen, or eventually they'll be so "upside-down" on their loans that they won't be able to get more!
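The "interest on interest" effect of rolling debt forward is easy to see with made-up numbers.  Continuing the hypothetical bridge bond above, here's what happens if the government never pays down the principal and just rolls the whole balance, unpaid interest included, into a fresh bond each term:

```python
# Hypothetical: a government that never retires principal, only rolls its
# debt into a new bond each term, capitalizing the unpaid interest.
debt = 10_000_000    # original amount borrowed
annual_rate = 0.04   # simple annual interest per bond term
term = 10            # years per bond

for rollover in range(1, 4):
    # Interest now accrues on the whole outstanding balance, including
    # interest capitalized from earlier rollovers.
    debt = debt * (1 + annual_rate * term)
    print(f"After rollover {rollover}: ${debt:,.0f}")
# After rollover 1: $14,000,000
# After rollover 2: $19,600,000
# After rollover 3: $27,440,000
```

Three rollovers in, the taxpayers owe nearly triple the cost of the original bridge, and nothing new was built with any of it.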

This is the reason why there is no such thing as a free lunch.  Eventually, every government program or expenditure requires that we, the people, pay it back.

Senator John Cornyn was recently quoted as saying that 51% of American households pay no income tax.  Let me repeat that:  51% of households in the United States of America are not paying income tax.  When people start bellyaching about the rich not paying their fair share of taxes, I would like you to consider this number.  If the 51% of American households that don't pay income taxes received no benefit from the programs and infrastructure that the government procured, then perhaps I would feel better about this.  However, what this really means is that 49% of us - that is, the rest of us - are paying for all the stuff the government has done, and is doing, for all of us that hasn't been paid for yet.  That figure is both astounding and distressing.  While I am in agreement that corporate tax loopholes need to be closed, I also feel that this is a good time to start considering a flat tax.  People cannot keep asking for and consuming resources without paying for them, and that is exactly what is happening now.  Municipal governments are increasingly in crisis trying to pay their debts.  The U.S. federal debt is now over 10 trillion dollars, with no end in sight.  In fact, this is what raising the debt ceiling is all about.  It's not just about what the federal government is spending, but really more about how much they've borrowed and how much they haven't yet paid back.

This is the reason why I tend toward being fiscally conservative (no, not Republican, I mean fiscally conservative, or even better, fiscally responsible).  As a society, we need to start understanding the cost of what we want, and carefully weigh that cost against the benefit we receive.  If this were your own household, and you were at least half-way financially responsible, you would not keep spending and spending and increasing your debt while having no way to pay it back.  That is what the government has done, and with the financial climate as it is right now, the prospect of our creditors asking for their money while the government has no one left to sell bonds to makes this even more dire.

I don't want to pay higher taxes, but even more so, I don't want to pay higher taxes while more and more people who consume more of the resources the government provides pay none.  Not only is that not fair, it is, in my opinion, criminal.  Yes, I agree that there are issues with health care and so on, but consider what will happen if you tax the doctors and health care providers right the heck out of this country.  Then what?  What happens when all the people who are innovating and making something for themselves leave the United States because they pay so much in taxes that their efforts are no longer fruitful?  I'm not sure you want to think about that.

Again, I am not in favor of "tax cuts for the wealthy," but I'm also not in favor of 51% paying no taxes.  I'm not in favor of someone with 3 kids, a big house, a big shiny new car, and all the crap they buy for themselves and their kids, telling me how they need a tax cut and asking why the rich are getting a tax cut.  How they want this program and that program to be funded by "the government."  We don't have any more money, and we can't afford to be giving tax cuts to the people who use the stuff the government (really us) pays for.  We have to start being responsible, and if we don't, there soon won't be any government-funded programs and infrastructure.  At all.  Maybe not even a government.

Sunday, July 3, 2011

Bye WNR3500L -- Hello WNDR3700

Back in March of 2010 I posted comments about my experiences with the Netgear WNR3500L wireless router.  At the time, I had some very positive things to say, but this update is the result of some greater wisdom I have obtained through experience since then.  What I was trying to do was move my IPv6 tunnel and iptables/ip6tables firewall to the WNR3500L and use it as a router as well as a wireless access point and Ethernet switch (I have my own way of setting up a firewall thankyouverymuch, and don't really feel like adapting to what others force me to do).

The Real WNR3500L Story

Netgear markets the WNR3500L wireless router as an "Open Source Router" and implies that the hardware can be customized as desired (see the article at "My Open Router").  This is not entirely true, because neither Netgear nor Broadcom (the manufacturer of the chipset and reference design) has provided sufficient documentation or source code for the hardware.  In the case of the switching chip (BCM53115S), the only information available online is a marketing announcement, and nobody has been successful at obtaining technical information directly from Broadcom.  Having the proper configuration of this chip is essential to the correct operation of the router, particularly if it is to be used for anything more than the software Netgear provides.

After literally hours of going through Google and any Broadcom/Netgear source code I could find (and stomach), I discovered several things:
  1. Broadcom effectively keeps most of the key device drivers for their wireless processor and switch chips as closed-source, despite what is claimed by Netgear (it isn't an "open source router" if the source isn't available).
  2. The person who wrote these drivers thought they were being clever by implementing an nvram-based configuration "registry" and then integrating it with the device drivers (who in their right mind puts userland-type functionality into device drivers?!).
  3. Everyone I have seen who tried to do what I did came to the same conclusion:  that you could only get so far without having to do some serious reverse-engineering of the Broadcom drivers.
  4. In addition to point #2, Broadcom also embedded the same registry access in the CFE bootloader for the WNR3500L, and if you screw around with that registry too much, you can effectively "brick" your router.  No, the CFE sources for the WNR3500L aren't available.
It looks like Broadcom hired some students to write the embedded code for this router, because it has all the hallmarks of an inexperienced programmer with clever ideas.  Don't get me wrong, I was one of those people at one time too, and maybe I still do the same thing from time to time, but I know code that doesn't inter-operate well with other code and is implemented haphazardly when I see it, and this is it.
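To make point #2 concrete, here is a minimal sketch of the pattern being criticized.  This is not Broadcom's actual code; the variable name and the "8*" tagged-CPU-port convention are modeled on nvram settings seen in router firmware dumps, and the parsing logic is my own illustration.  The point is that the "driver" pulls free-form strings out of a userland-style registry instead of being handed structured parameters:

```python
# Illustrative sketch of the nvram-registry pattern -- NOT Broadcom's code.

# Userland-style key/value "registry", as it might live in flash nvram.
nvram = {
    "lan_ifname": "eth0",
    "vlan1ports": "1 2 3 4 8*",  # switch ports in VLAN 1 ("8*" = CPU port, tagged)
    "boardflags": "0x0710",
}

def nvram_get(key, default=""):
    """Mimics the registry lookup the drivers lean on."""
    return nvram.get(key, default)

def switch_driver_init():
    """A 'driver' that parses free-form registry strings at init time --
    the userland-in-the-kernel style complained about above."""
    ports = nvram_get("vlan1ports").split()
    return {
        "member_ports": [p for p in ports if not p.endswith("*")],
        "cpu_ports": [p.rstrip("*") for p in ports if p.endswith("*")],
    }

print(switch_driver_init())
# {'member_ports': ['1', '2', '3', '4'], 'cpu_ports': ['8']}
```

Every consumer of the registry (driver, bootloader, userland service) has to agree on these ad-hoc string formats, which is exactly why poking the wrong value into the registry can brick the router.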

I can no longer recommend this router unless you're 100% happy with the original Netgear firmware.

The only reason that Netgear can still call this an "Open Source Router" is that they do release the Open Source code that was used in the software design of the router.  Looking at the code they release, it seems apparent to me that Netgear has attempted to keep their end of the bargain, but they only provide binary stubs for much of the hardware drivers and utilities because those are proprietary to Broadcom and other entities.

What about DD-WRT?

Have you ever looked at the DD-WRT source code?  No, seriously, have you?  If you have any question about what I am about to say, do it and come back here and then read this...

While DD-WRT does boot and kind of work on the WNR3500L, it doesn't work well.  To those who have put the effort into this project, I feel your pain, and I appreciate the hard work you did, but the bottom line is that getting something to work and getting it to work well are two different things.  Every single time I tried DD-WRT on the WNR3500L, the switching functions worked poorly (it would randomly drop packets on my MythTV systems, so video would randomly pixelate).  This didn't happen with the factory firmware from Netgear.

To make a long story short, the people at DD-WRT basically took the mantra, "If you can't beat 'em, join 'em," and worked around the weird registry-based tangle of userland/kernel code by just accepting and using it too, since they had no choice but to use many of the Broadcom binary-only device drivers.  However, the problem is that Broadcom's do-it-all userland router service ("acos") pokes some funky values into the switching chip before and after the driver is loaded, and I don't believe that DD-WRT is doing that right.  What are they not doing right?  Hell if I know.  I couldn't follow the DD-WRT source code for my life.  I couldn't tell which parts were for the WNR3500L and which were for other routers, which pieces were binary-only (proprietary) and which were things I could look at and change.  There were pieces of code clearly marked as Broadcom proprietary that I'm not really sure were current for this router.  It is a bloody mess.  No real offense to the DD-WRT folks, because what they've done overall is pretty impressive, but that source code organization requires some serious drugs to understand!  Since I'm staying away from those, that pretty much put an end to my idea of fixing DD-WRT on this platform.

By this point, I decided that the only thing the WNR3500L was good for was exactly how it was sold, with the original Netgear firmware.  If that's what you want, it's a nice router.  If it isn't, then you'll do what I do, and give it to someone else who has a need for a router like this.

Discovering OpenWrt

During my frustration with DD-WRT and trying to decipher the ungodly source code that Netgear provided, I started looking seriously again at OpenWrt.  On the surface, OpenWrt is just another open source firmware alternative for common wireless routers.  My first experiences with it were not too good, so I kind of wrote it off and forgot about it.  However, my second look uncovered a real gold mine of technical wizardry and a lot of people who really seemed to understand what they were doing.  Now granted, they and I may disagree on some implementation issues, but the difference between OpenWrt and DD-WRT is that the OpenWrt folks basically give you serious tools for customizing things the way you want.  Unlike DD-WRT, the source code has the flavor of the FreeBSD ports system.  It is mostly a well-organized and very logical embedded systems development environment.  After looking at OpenWrt for a while, I started to see that this was more than just a tool for improving consumer-grade wireless routers; it could definitely be the basis for other embedded design projects.

One good OpenWrt tool is their wiki.  I don't usually like wikis because they're organized horribly and using "search" comes with the same problems as any other search engine (you end up with hundreds of results and never the one you're looking for).  The OpenWrt wiki is mostly different in that it does have some organization to it, and the answers are mostly there.  Software/hardware developers don't like to write documentation, and the OpenWrt docs are no exception.  However, it does look like the OpenWrt people are trying, in good faith, to get some stuff documented.  In particular, their supported devices list is extremely comprehensive, with photos of the inside of hardware, pros and cons, and everything in between.  I really liked this, and it was where I started my search for a new wireless router.

Enter Netgear WNDR3700

After a lot of thought and looking through the OpenWrt hardware list, I decided to purchase the Netgear WNDR3700 wireless router.  Frankly, I couldn't explain the low-level hardware much better than the OpenWrt people, so see their device page on the OpenWrt wiki.  A higher-level feature list:
  • Dual-band wireless "N" (2.4 GHz and 5 GHz)  [note that I don't think I have any 5 GHz stuff, but it is nice to have available]
  • 4-port Gigabit Switch (like the WNR3500L)
  • Gigabit WAN port (also like the WNR3500L).  However, the WAN port goes directly to the CPU and not to the switching chip, so it should not be used as a pass-thru bridged port unless you're willing to have the CPU as a choke-point!
  • USB port (for external storage)
  • Chipset is Atheros (CPU/wireless) and Realtek (switch) based -- No more problems with it being Broadcom-proprietary.
  • Around 6W power usage (as measured by my Kill-a-Watt)!  WOW!
The case is kind of hokey in some ways, and the stock antennas are a joke.  However, there are a few modifications that people have documented to replace the antennas.  At the same time I make fun of the hokey antennas, I will also say that (at least for 2.4 GHz) the range is comparable to anything else I've used.  So I'm not sure it's worth complaining about.

Also interesting is that the "stock" Netgear firmware is actually a modified older version of OpenWrt.  The user interface is the Netgear-branded one typical of their other products, but underneath it was OpenWrt.  Cool.  I never really did anything with the original firmware before going to a newer release of OpenWrt.

WNDR3700 Bad Things

Now for the bad and ugly...  The older WNDR3700 had issues with the 2.4 GHz wireless radio that would simply stop working after a while.  The newer WNDR3700 (marked on the side of the box as WNDR3700v2) doesn't have that problem, but also isn't supported by OpenWrt except at the development release.  No big deal, since it seems to work OK.  Except...

There is something wrong with the open source access point software hostapd.  This daemon handles the access point operations, such as radio frequency/channel adjustment, linking/unlinking from clients, and most importantly, handling the WEP/WPA/WPA2 encryption/decryption.  It mostly works, except that my wireless camera no longer works with it.  In debugging mode, after the device has mostly linked up, I get the message "WPA: received EAPOL-Key 2/2 Group with unexpected replay counter" and I cannot communicate with the camera (even though it appears to be connected).

After hours of work, I think I know what is wrong with hostapd, but am not able to fix it.  Basically, my camera's configuration has a single check-box to enable WPA/WPA2 and a place to enter the pre-shared key (PSK).  When I reconfigured my router to accept WPA only or WPA+WPA2 (I think), the camera linked to the router just fine, but not when I enabled WPA2 only.  It looks like the hostapd software is either interpreting the WPA2 specification too literally and not allowing for some poetic license when it comes to the spec, or...and this is more likely...the camera is trying WPA and WPA2 kind of at the same time, and that confuses hostapd.  The problem is that the camera works with any other wireless router I could throw at it, so something is clearly buggy here.  Some people are accusing the Atheros chipset, but I suspect that it is more likely a weird bug in hostapd.  It, too, has source code that is convoluted (to say the least), like just about every piece of software that implements cryptography.  I have been trying to understand what is wrong, and am still doing so.  However, this complicates things a bit, since I would still really like to have my camera work!
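For reference, the configuration distinction involved looks roughly like this in hostapd's own config-file terms.  This is a sketch built from standard hostapd options, not the actual OpenWrt-generated file on my router (OpenWrt writes this file for you from its own settings):

```
# Mixed WPA+WPA2 -- the mode under which the camera associated fine.
interface=wlan0
ssid=MyNetwork
wpa=3                      # bit 0 = WPA, bit 1 = WPA2/RSN; 3 = accept both
wpa_key_mgmt=WPA-PSK
wpa_passphrase=secretpsk
wpa_pairwise=TKIP CCMP     # pairwise ciphers offered for WPA

# WPA2-only -- the mode that triggered the "unexpected replay counter"
# failure with the camera:
# wpa=2
# rsn_pairwise=CCMP
```

In WPA2-only mode the group key handshake is strictly replay-counter checked, which is consistent with the log message above if the camera answers with a counter from the wrong handshake.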

Unfortunately, I didn't try the original Netgear software to see if that also had trouble, being based on OpenWrt.

All this being said, I still think that the router was a good purchase so far.  We'll see what happens when I start throwing more complicated stuff at it!

Why Not A Dedicated Router?

I was faced with a dilemma when I started this project:  Should I try to make better use of my wireless router hardware, or just buy a small, low-power computer like the Dreamplug from Globalscale Technologies?  This would have been easier in the short-term because I could have kept the wireless issues and routing issues separate.  However, in the longer term, I didn't like the idea of putting another underutilized power-using (albeit low-power) device on the network, and having to maintain yet another OS/platform.  In addition, the Dreamplug (with console/JTAG adapter) is around $200 with shipping, and I really couldn't justify spending another $200 for a system that would just sit on the network routing/firewalling packets.  On the other hand, my time has some value too.

In the end, I decided on replacing the wireless router because I figured that the long-term issues of running a separate device outweighed what I could learn by implementing a router on a less-expensive hardware platform.  Down the road, I could use these low-cost, low-power routers as cheap embedded controllers for a number of applications (and at work, we have some situations like this).  I will feel much more satisfied with my decision after solving the wireless camera vs. hostapd problem.

Final Words For Broadcom/Netgear

What would be the motivation for Broadcom or Netgear to make hardware information open?  Well, there are a lot of computer enthusiasts out here who are not hardware designers but would love to buy these products, take them apart (so to speak), and make them do things beyond what the average consumer would want.  We want to implement every last function that the device is capable of.  In addition, we're writing the software for free, and making it available for everyone to use.  That's functionality that Broadcom and Netgear can leverage for future product designs or enhanced firmware features.  Nobody really loses here.