The way we quantify how well our roads are (or aren’t) working isn’t something that tends to get a whole lot of play outside of the transportation wonkery, but it has a drastic effect on policies and livability. The most commonly used metrics to describe our system, including the infamous level-of-service metric, are drawn from something called the Highway Capacity Manual. See how the name of the manual doesn’t really imply that it’d be terribly useful for designing safe, welcoming local streets? Most jurisdictions don’t, and that is one of the main reasons why too many urban streets have become de facto highways.
Over the last several years, traffic engineers have increasingly been letting go of their long-held fondness for LOS and other traditional performance metrics, and in many cases are even leading the charge against them. The inadequacy of LOS as a primary measure of performance is perhaps most clear in California, where the state's all-powerful Environmental Quality Act buttressed the importance of the metric by (ironically) requiring that environmental analyses consider LOS when evaluating the impacts of a project. So perhaps it's not surprising that California has struck the biggest blow to LOS to date, with new guidelines that evaluate projects based not on how much they will increase auto delay but instead on how much vehicular traffic they induce.
It's hard to overstate how radically this departs from the status quo. For many jurisdictions, an over-reliance on capacity-based metrics has produced policies that favor anything capable of moving one more car. California's new standards appear to turn this idea on its head, favoring policies and land uses that create one fewer trip (or one fewer vehicular mile traveled). Thus, it would appear that analyzing a particular idea with California's VMT-based methodology—whether to widen an intersection approach to include a turning lane, for example—might lead to the opposite conclusion from analyzing it with traditional metrics. Though the turning lane would certainly reduce delays and thus improve LOS, there's plenty of evidence to suggest that it would also induce new traffic and thus be undesirable (or impermissible, even?) based on the VMT-based metric. That's huge!
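To make the contrast concrete, here's a toy sketch of the turning-lane scenario. The numbers are invented for illustration (real analyses rely on calibrated travel models), but it shows how the same project can pass a delay-based test and fail a VMT-based one:

```python
# Toy comparison of delay-based (LOS-style) vs. VMT-based project review.
# All figures are hypothetical; this is not any agency's actual methodology.

def los_style_review(delay_before_s, delay_after_s):
    """Delay-based test: the project 'passes' if it reduces average vehicle delay."""
    return delay_after_s < delay_before_s

def vmt_style_review(daily_vmt_before, daily_vmt_after):
    """VMT-based test: the project 'passes' if it does not induce additional travel."""
    return daily_vmt_after <= daily_vmt_before

# Hypothetical turning lane: delay drops, but the added capacity
# induces some new driving, raising total vehicle-miles traveled.
delay_before, delay_after = 45.0, 30.0        # seconds of delay per vehicle
vmt_before, vmt_after = 100_000.0, 103_000.0  # vehicle-miles traveled per day

print("Passes LOS-style test:", los_style_review(delay_before, delay_after))  # True
print("Passes VMT-style test:", vmt_style_review(vmt_before, vmt_after))      # False
```

Same project, opposite verdicts: which metric you measure determines what you build.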
The VMT-based metric is neither perfect nor wholly complete. Success of the methodology relies heavily upon our ability to estimate the number of trips a project may generate which, as I described here, is something of an inexact science. The need to include trip length in these projections serves to widen the gulf between the data that's needed and the data that's available. And the state's guidance [pdf] for utilizing the new methodologies does not appear to significantly improve upon the half-hearted methodologies engineers currently employ to evaluate the safety ramifications of a given project. Finally, a scathing white paper [pdf] from UCLA's School of Public Affairs suggests that the VMT-based methodology may not even be all that great at its purported goal of teasing out the environmental impacts of a project, although I'd hasten to challenge some of the assumptions underlying their analysis.
Despite their shortcomings, the new VMT-based methodologies represent a big step forward, and I'll be curious to see how they're applied by colleagues in California. Those of us who favor a multi-modal and safety-oriented approach are regularly stymied by traditional metrics that concern themselves with only capacity and delay, only as they pertain to autos, and only during the busiest 1% of the day. Though it leaves important considerations unaddressed, California's new approach offers a way to overcome these hurdles. Time will tell what effects this will have, but there's plenty of reason for optimism.
So what does this mean for Portland? That's a good question. It's now been two years since the city launched a project to update our performance standards, but sadly this effort seems to have disappeared into the same memory hole as bike share. In keeping with a storyline that's becoming too familiar, others innovate while Portland waits.
2 responses to “California’s New Performance Metrics & Getting What You Measure”
What a great post. I’ve been following and informing the development of this issue in California since 2009, and was one of many who recommended that VMT per capita be used as a metric. There are issues with any metric — not just what is measured, but how it is measured, and the assumptions underlying the model used to generate the results. However, in general, reductions in VMT/capita indicate patterns that are beneficial for other modes and for more compact land use, whereas increases in VMT/capita indicate trends towards auto-oriented patterns.
Could it work in Portland? Sure, but the tools used to measure it would likely need to be advanced beyond those currently used by the City. There are many, many factors that influence VMT, including regional location, local land use, the FAR of nearby retail, intersection and street geometry, safety improvements, speeds, congestion, income… so a model must be sensitive to the actual changes in those variables that are proposed as a part of each project in order to show results that folks can have confidence in. A traditional four-step travel model, for instance, can be horribly insensitive to this stuff. I remember one instance where such a model, when given a scenario with lots of TOD around a new commuter rail station, actually showed negative transit ridership. It's not that TOD wouldn't have produced commuter rail riders in that situation — the model just wasn't sensitive to that stuff. So, it's important to use the right models, ones that are sensitive to the things being proposed.
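The commenter's point about VMT per capita can be illustrated with a quick back-of-the-envelope calculation (the numbers below are made up): a compact infill project can raise a region's *total* VMT while still lowering VMT *per capita*, because new residents of a walkable area drive far less than the regional average.

```python
# Toy VMT-per-capita arithmetic with invented numbers, illustrating why the
# per-capita form of the metric rewards compact, less auto-oriented growth.

def vmt_per_capita(total_vmt, population):
    """Average daily vehicle-miles traveled per resident."""
    return total_vmt / population

# Hypothetical region before the project: 20 VMT per person per day.
base_vmt, base_pop = 1_000_000.0, 50_000

# Hypothetical infill project adds 2,000 residents who each drive ~8 miles/day.
new_vmt = base_vmt + 2_000 * 8.0
new_pop = base_pop + 2_000

print(vmt_per_capita(base_vmt, base_pop))  # 20.0
print(vmt_per_capita(new_vmt, new_pop))    # ~19.54, so the metric improves
```

Total VMT rose by 16,000 miles a day, yet VMT per capita fell, which is exactly the pattern the commenter describes as beneficial for other modes and compact land use.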
Interesting article. I suggest we ask ourselves a big question: why is this poor methodology tolerated for transit planning? I would offer two ideas.
1. The problem with bad transit methodology is part of a bigger picture. Portland makes its decisions based on politics and who is a big campaign donor. The areas I am familiar with (the surveys the city uses) are generally risible because they violate every standard protocol out there. The city calls a push poll “citizen involvement.” What a crock.
2. The city has no interest in improving the surveys, the methodology you discuss, or any other data collection. Why? Because, in general, I see a city with its fingers in its ears.
Brian & I disagree on values, but I whole-heartedly agree that we need good data, up-to-date methodologies, and logic in all areas of planning. I, personally, try to handle the truth. I trust that Brian will change his thinking if new data emerges. What concerns me is that city hall doesn't care about facts and accurate gauges of public opinion, because neither enters into the real sausage making of policy.
Mr. Smith- you can improve BPS data collection by immediately banning voodoo crap like "visioning sessions." A smart PSU student on Survey Monkey would be a vast improvement over the mangled surveys BPS pays for. Metro and other agencies are bad- but BPS takes the ridiculous methodology cake.
C’mon- how expensive is it to use an up-to-date methodology? How hard is it to implement Brian’s idea?