
"But what if we didn't have to do it that way?"

Kate May, Project Manager at PPI, shares her insight on Lean processes and the improvement that comes from working together.


“But what if we didn’t have to do it that way?” This question – a somewhat subversive one to ask of public servants – is what lured me into the movement of lean process improvement.


Kate May, PPI Project Manager

I was working as a budget analyst at the time – my first job out of grad school. In school, my most exciting coursework had involved putting programs and policies to the test to understand what actually worked operationally in government and why. But I found that in the real world of local government, while there were semi-regular measurements of key performance indicators (usually tied to annual budget allocations), there were few opportunities for actual data-driven experimentation. That was the case until I stumbled into the disruptive world of the Denver Peak Academy Team.


The Peak Team were the process improvement counterparts to the financial analyst team within Denver’s Budget Office. Under Mayor Hancock, the team was created to empower the staff doing the work of government operations to measure the core aspects of their processes to find and remove waste, with the goal of saving the city money and cutting down on the mutual frustrations of staff and stakeholders when government experiences don’t work out the way they should.


The first process improvement training I took hooked me on Peak’s approach: measuring what is often considered the minutiae of government and finding ways to engineer more value into the experience for the resident. About six months after my first process improvement class, I shifted over to work on the Peak Team as the Senior Operational Data Scientist, where I got to build data models, create operational dashboards, and design and carry out controlled trials to test hypotheses about how teams might be able to improve outcomes by changing their approach.


My favorite experiment we ran was one that ended up disproving the hypothesis we set out to test. The team responsible for parking enforcement wanted to reduce the number of people illegally parking near the football stadium during games; at that time, hundreds of cars per game were being towed to the City’s impound lot for non-compliance with the posted signs (which existed to ensure safe emergency response to the stadium and surrounding area). The Parking Team believed the cause must have been the City’s signage: that visitors either found the signs hard to read or missed them entirely when looking for parking options.



Parking was considering investing in new high-tech signage, which carried a weighty price tag. But they wanted to validate that a lack of information or clarity really was what caused the problem. To run this test, we broke the area surrounding the stadium into different zones: some kept the status quo, and others got new low-cost signage with clearer language about parking, a number that people could call or text to learn whether it was legal to park on that day and at that time, and a URL they could visit to read the same information.


We used statistical techniques to ensure that the treatment and control areas were equal in expectation, based on observable factors in our datasets, so that the experimental outcomes would measure the impact of the new signage rather than something inherently different about one place versus another.
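The basic idea – randomly assigning zones to treatment or control, then checking that an observable covariate balances across the two groups – can be sketched in a few lines of Python. The zone names and the cars-per-game numbers below are invented for illustration, not the team's actual data or code:

```python
import random
import statistics

# Hypothetical game-day zones around the stadium, each with one
# observable covariate: average cars parked per game (made-up numbers).
zones = {
    "A": 410, "B": 385, "C": 520, "D": 300,
    "E": 450, "F": 395, "G": 480, "H": 330,
}

random.seed(7)  # fixed seed so the assignment is reproducible
names = sorted(zones)
random.shuffle(names)
half = len(names) // 2
treatment = names[:half]  # zones that get the new low-cost signage
control = names[half:]    # zones that keep the status quo

# Balance check: with random assignment, the covariate means should be
# close "in expectation" – a large gap would suggest an unlucky split.
t_mean = statistics.mean(zones[z] for z in treatment)
c_mean = statistics.mean(zones[z] for z in control)
print(f"treatment mean: {t_mean:.0f} cars/game, control mean: {c_mean:.0f} cars/game")
```

In practice a team might re-randomize or stratify if the balance check fails, so that any later difference in outcomes can be attributed to the signage rather than to the zones themselves.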


After a few months of testing, we found that there was no difference in ticketing and towing between our treatment and control regions. While people were using the informational resources to understand the legality of their choices, they were carrying out the same behaviors regardless of that information. Their desire to park and get to the game outweighed the consequences of breaking the law. To change this behavior, signs weren’t the answer, and we’d need to test out other strategies to reduce illegal parking.
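A "no difference" conclusion like this one can be checked with something as simple as a permutation test: shuffle the treatment/control labels many times and ask how often a gap at least as large as the observed one appears by chance. The tows-per-game counts below are invented for illustration; the team's actual analysis may have used different methods:

```python
import random
import statistics

# Invented tows-per-game counts over the test period.
treatment_tows = [38, 41, 35, 44, 39]  # zones with new signage
control_tows = [40, 37, 42, 36, 43]    # status-quo zones

observed = statistics.mean(treatment_tows) - statistics.mean(control_tows)

# Permutation test: repeatedly shuffle the pooled counts into two
# arbitrary groups and count how often the group-mean gap is at least
# as large as the one we actually observed.
pooled = treatment_tows + control_tows
random.seed(0)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:5]) - statistics.mean(pooled[5:])
    if abs(diff) >= abs(observed):
        extreme += 1
p_value = extreme / trials
print(f"observed difference: {observed:+.1f} tows/game, p = {p_value:.2f}")
```

With a tiny observed gap like this, the p-value comes out large: shuffled labels produce a gap at least this big most of the time, so the data give no reason to believe the signage changed towing outcomes.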



While this effort failed to achieve the result we hoped for, it succeeded in advancing our understanding of the problem – and it prevented us from scaling up a strategy that was unlikely to succeed. Not all process improvement involves long testing trials, but all innovation takes the courage to state our beliefs out loud: if we do things in X way, we believe it will result in Y effect, because of Z. We are vulnerable to being proven wrong. But even when we fail at predicting outcomes, we win the knowledge that comes with good faith attempts at improvement. And the more we share the knowledge of our wins and losses, the more we all grow together.



Denver Skyline – Photo by Nils Huenerfuerst on Unsplash


That’s why I love this work.


Kate May is a Project Manager and data and technology SME for PPI. In her day job, she works as a Solution Consultant for Granicus, a government technology company. In addition to working for the City and County of Denver, Kate served the City of Rochester, NY as IT Operations Director and Chief Performance Officer. She lives on a working farm in Frederick County, MD.





