The Company
Philosophy
NeuMod Labs (NML) was founded to develop and deploy data infrastructure for the design, analysis, compliance, and operation of the built environment. The company's goal is to take a fresh look at the Architecture-Engineering-Construction (AEC) industry and provide cutting-edge data, automation, and analytics solutions for the sector-wide decarbonization of buildings.
NML's leadership applies its 25+ years of combined experience to developing pragmatic solutions for targeted decarbonization goals, strategically prototyping and deploying the next generation of sophisticated calculation methods and novel technology solutions that foster industry-wide collaboration, with a focus on market differentiation. The team has spent years researching and developing innovative data science pipelines and workflows to build foundational technologies for data preparation, model training and tuning, and the deployment of targeted data-driven technologies.
NML's leadership is a firm believer in the philosophy of first principles, so our typical approach involves deconstructing a problem into its constituent elements and re-engineering a solution from the ground up with simplicity and modern technology at its core. Drawing on lessons learned from a combined 25+ years of industry experience, reputable thought leadership, and the rigorous analytical methods acquired in academia and research, NML is developing cutting-edge data and analytics solutions for the AEC industry.
Mission
Our mission is to facilitate an equitable, pragmatic, and cost-effective decarbonization of the buildings sector to help mitigate the worst effects of climate change.
Certification
NML is a certified small business for federal awards and is registered with SAM.gov.
Our Services
Low Carbon Buildings
Decarbonization is a major focus of the buildings industry and is gaining momentum daily on many fronts, including energy use, refrigerant and methane leaks, and embodied carbon. We conduct deep-dive decarbonization studies that leverage interval data from building management systems to inform detailed analyses. NML draws on its background in building physics, sustainability, engineering design, and data analysis to determine realistic targets and pathways for decarbonization. The team specializes in projects with complex systems, indoor environmental quality requirements, and/or façade challenges, where the goal is often to achieve maximum benefit while minimizing disruption to ongoing operations.
Buildings Research
NML collaborates closely with its vast network of industry partners to lead innovative buildings research that advances cross-sector decarbonization and fosters resilient communities among those that are, or will be, most affected by climate change. NML and its partners continually identify high-priority R&D areas and work with government, code jurisdictions, trade associations, professional organizations, private businesses, and others to solve this problem with the alacrity it deserves.
Software & Products
The buildings sector is a highly fragmented market, so project delivery workflows are extremely siloed, which reduces process efficiency, stymies innovation, and impedes the adoption of advanced technologies. The NML team can help AEC firms gain a competitive edge over their peers by re-engineering their workflows for cost-effective data-driven design, consulting, and automated reporting. It can help turn large data lakes into structured data warehouses from which firms derive automated, intelligent insights using completely tool-agnostic data exchange. Standardized data ingestion, transformation, and validation can greatly reduce production costs by encouraging internal and external collaboration and cutting down duplicated effort.
NML's experience includes typical delivery software such as Autodesk Revit, AutoCAD, SketchUp, Trane TRACE 700, eQUEST, EnergyPlus, OpenStudio, TRNSYS, HAP, COMcheck, and others.
Training & Expert Reviews
NML provides training services to a variety of audiences in buildings and related areas, including decarbonization, building performance modeling, data engineering and data modeling, building codes and standards, programming for automation, and others. In addition, NML is routinely invited to lend its expertise to review draft standards, funding proposals, and engineering design documents. NML also provides quality assurance and expert legal reviews.
Founders

Sagar Rao
Sagar is a renowned expert in the field of Computational Analysis for the Built Environment. His unique background covers building physics, data engineering, and software development. As a Building Performance Consultant, he has successfully delivered over 50 high performance buildings for a clientele that includes federal agencies, state authorities, prominent healthcare systems, renowned universities, financial institutions, and national laboratories. Sagar provides leadership to several ASHRAE, International Building Performance Simulation Association (IBPSA), and Illuminating Engineering Society (IES) committees. He has directly contributed to the development of several building codes, standards, and government regulations, including the US Code of Federal Regulations.
Expertise
Building Codes and Standards, Building Performance Modeling, Research, Climate Modeling for the Built Environment, Project Delivery Automation, Data Engineering, Analytics and Insights Generation
Fred Betz
Fred is a sought-after expert in complex facilities decarbonization and electrification. Fred is a mechanical engineer by training with a PhD in Building Performance and Diagnostics from Carnegie Mellon University. He has 14 years of experience in mechanical engineering and building performance consulting focused on climate-responsive design in facilities such as hospitals, laboratories, data centers, and corporate spaces. Fred has published more than twenty peer-reviewed articles on topics including decarbonization, water efficiency and reuse, climate-precipitation, HVAC technologies, and infection prevention related to healthcare facility HVAC. Fred also serves on several industry and trade association committees within ASHRAE, ASHE, and the U.S. EPA.
Expertise
Building Performance Consulting, Decarbonization Studies, Quality Assurance Review, Ventilation and Infection Prevention, Building Technology Research, Training

Key Contributors

Satish Pethkar
Full-stack Web Developer
Satish has over 11 years of experience developing desktop and web applications for a variety of industries, including banking, workplace productivity, healthcare, and media streaming. His past projects span technologies such as ReactJS, NodeJS, Angular, MySQL, MongoDB, Django, and a variety of web visualization packages. He specializes in deploying software platforms using popular cloud services such as AWS and Azure. He has most recently been developing Electron-based desktop applications. Satish has made significant contributions to the web deployment of NML's Project Jarvis.
Parag Rastogi
Climate Change Modeling Expert
Parag did his PhD with the LIPID group in the School of Architecture, Civil, and Environmental Engineering at the Ecole Polytechnique Federale de Lausanne (EPFL) in Lausanne, Switzerland. His research was on the assessment of, and methods to address, uncertainty in building performance simulation arising from weather data and the thermo-physical properties of materials. He is passionate about healthy buildings, infrastructure, and climate. When not working on challenging trans-disciplinary problems in indoor environmental quality and building physics, he is to be found reading or writing about air quality, energy, development, the environment, and policy. Parag is a close collaborator in NML’s research to develop high-quality stochastic predictive climate files for the built environment.

Blogs
Hear Us Out
- Economic Analysis for Building Performance Modeling
- To Model, or Not To Model
- Building Performance Documentation
- Less Talk, More Development
- There can be Only One Energy Code Compliance Path
- ASHRAE Standard 209 in the Real World - Part 2
- ASHRAE Standard 209 in the Real World - Part 1
- Savings, Conservation, and Efficiency
- A script stole my job, but I am ok with it.
- Performance Modeling Saves Money
- Annoyances become roadblocks with scale
- More on the Open-Source BPM Library
- There is a standard for that?
- Excuse me; but I do not speak your language
- Never Ask a Spreadsheet to do an Energy Modeler's Job
- Uncertainty 101
- The Hunt for the Right Information
- Benchmarks, Benchmarks, Everywhere
- Transparency
- Open-source BPM Library
- What Practitioners Need
- The Current State of Affairs
Economic Analysis for Building Performance Modeling
2021-04-25
Contributor
NML
A very common tool for making decisions informed by energy analysis is the payback analysis. Payback analysis can take on several shapes and levels of complexity, but unfortunately many people use inappropriate terminology or mix and match terms that can't be mixed and matched. I could send you to this 41-page paper from the U.S. Department of Energy, https://www1.eere.energy.gov/buildings/appliance_standards/pdfs/ashrae_final_rule_tsd_06_lcc_pbp_2012_05_01.pdf, but I thought I'd boil it down to approximately one page.
The reason I raise this is to help clarify expectations between modelers and users of the results data. One of my readers recently pointed a co-worker to a blog I wrote to explain the difference between energy code compliance paths. I’m ecstatic that these blogs are a) being read, and b) being shared with others to provide people insight into the often-opaque world of building performance modeling.
I can't recall how many times someone has asked me for a life cycle cost analysis (LCCA) and ended up wanting a simple payback analysis. The primary differences are the cost and accuracy of calculating an answer. A simple payback may take an hour to calculate, whereas an LCCA could take days just to collect and enter the necessary information.
More often than not, people want a simple payback analysis: the incremental cost of an enhancement divided by the reduction in annual operating cost. Simple payback is a perfectly valid approach in many instances, but it has limitations. A good application, for example, is comparing a standard water-cooled chiller to a more efficient water-cooled chiller. Both products have essentially the same components, life expectancy, and maintenance requirements. Therefore, the differences are only the first cost and the operating cost (energy and water via the cooling tower). The results are expressed in years, since the operating cost differences are measured in annual savings.
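The arithmetic really is that simple. Here is a minimal sketch in Python; the chiller numbers are purely illustrative, not from any real project:

```python
# Simple payback: incremental first cost divided by annual operating savings.

def simple_payback(incremental_cost: float, annual_savings: float) -> float:
    """Years needed to recover the extra first cost of the better option."""
    if annual_savings <= 0:
        raise ValueError("No annual savings: payback is undefined.")
    return incremental_cost / annual_savings

# Hypothetical example: a high-efficiency water-cooled chiller costs
# $40,000 more but saves $8,000/year in energy and water.
years = simple_payback(40_000, 8_000)
print(f"Simple payback: {years:.1f} years")  # 5.0 years
```

Note what is missing: no escalation, no discounting, no maintenance or replacement differences. That is precisely why the method only works when the two options differ in nothing but first cost and operating cost.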
Life cycle cost analysis builds upon simple payback and includes additional factors that quantify differences between two options. Critically, it also includes the time value of money and is calculated as a net present value. For more on net present value, see: https://hbr.org/2014/11/a-refresher-on-net-present-value
The real value of LCCA comes when multiple cost differences exist between two or more options. The differences could be in utilities, maintenance, equipment life, etc. Let's return to the chiller example, but in this case compare an air-cooled chiller vs. a water-cooled chiller. Key differences include:
- lower first cost for the air-cooled chiller,
- higher annual electricity cost for the air-cooled chiller (this is not always the case, but is often true),
- lower annual water cost for the air-cooled chiller,
- lower annual maintenance cost for the air-cooled chiller (no chemical treatment or cooling tower maintenance), and
- shorter life span for the air-cooled chiller, causing more frequent replacement costs (see the ASHRAE Equipment Life Expectancy Chart, https://www.excelcoservices.com/wp-content/uploads/2017/06/ASHRAE-EOL-Chart.pdf).
The key differences that are being quantified here are the varying escalation rates between electricity and water as well as the time value of money between having to replace the air-cooled chiller sooner and more frequently than the water-cooled chiller.
The time frame of the analysis can be variable, but generally should be set to a multiple of the equipment life or to the building life. For example, in a 50-year building the water-cooled chiller plant may be replaced once, whereas the air-cooled chiller will be replaced twice.
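To make the mechanics concrete, here is a minimal Python sketch of such a comparison. Every cost, rate, and service life below is a placeholder, not drawn from any real project or from the BLCC tool; the point is only to show how escalation, discounting, and replacement timing enter the calculation:

```python
# Minimal life-cycle cost sketch (hypothetical inputs). Each option has a
# first cost, annual costs that escalate at their own rates, and a service
# life that triggers replacement within the study period. All cash flows
# are discounted back to a net present value.

def npv_lcc(first_cost, annual_costs, service_life, study_years, discount_rate):
    """annual_costs: list of (year-1 cost, escalation rate) tuples.
    Replacement is assumed to cost the same as the original first cost."""
    total = first_cost
    for year in range(1, study_years + 1):
        d = (1 + discount_rate) ** year
        # Escalating operating costs (energy, water, maintenance).
        for cost, esc in annual_costs:
            total += cost * (1 + esc) ** (year - 1) / d
        # Replacement at the end of each service life within the study period.
        if year % service_life == 0 and year < study_years:
            total += first_cost / d
    return total

# Illustrative 50-year comparison: the air-cooled chiller is replaced at
# years 20 and 40 (twice); the water-cooled chiller at year 25 (once).
air_cooled = npv_lcc(300_000, [(60_000, 0.03), (5_000, 0.02)],
                     service_life=20, study_years=50, discount_rate=0.03)
water_cooled = npv_lcc(450_000, [(45_000, 0.03), (6_000, 0.02), (12_000, 0.02)],
                       service_life=25, study_years=50, discount_rate=0.03)
print(f"Air-cooled NPV:   ${air_cooled:,.0f}")
print(f"Water-cooled NPV: ${water_cooled:,.0f}")
```

The option with the lower NPV wins; note that the result can flip depending on the escalation and discount rates chosen, which is why standardized rates matter.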
One last example is the hybrid approach of simple payback and LCCA. Sometimes additional factors such as annual maintenance are added to a simple payback if there's a substantial difference. Again, this can be done, but use caution, as it's not necessarily a complete picture. For example, if there's a substantial difference in labor, and labor escalation rates are significant, then the time value of money can change the results.
Some helpful hints:
- BLCC 5 is a tool developed by the National Institute of Standards and Technology (https://www.nist.gov/services-resources/software/building-life-cycle-cost-programs) to complete these analyses, but many folks create their own. If you create your own, likely more user-friendly, tool, make sure you get the same results and use the same escalation rates. The generally accepted accounting principles for LCCA are not arbitrary.
- For LCCAs that exceed the 30 years covered by the EIA/FEMP energy escalation tables, the 30th-year escalation rate is simply repeated for each subsequent year, which is why you often see flat lines 30+ years out on an LCCA.
- Inflation is not included in an LCCA because it's not different between the options; if it's not different, it just cancels out. Discount rates and escalation rates are not inflation.
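The table-extension convention from the second hint can be sketched as follows. The sparse rate table is hypothetical; real values come from the annual EIA/FEMP publications:

```python
# Hypothetical escalation-rate table (real values come from the annual
# EIA/FEMP tables). Beyond the last tabulated year, the final rate is
# simply repeated, which produces the flat escalation line seen past
# year 30 in long LCCAs.

fuel_escalation = {1: 0.010, 2: 0.012, 30: 0.015}  # sparse, illustrative

def escalation_rate(year: int, table: dict) -> float:
    last_year = max(table)
    if year >= last_year:
        return table[last_year]  # repeat the final tabulated rate
    # Otherwise fall back to the nearest earlier tabulated year.
    prior = [y for y in table if y <= year]
    return table[max(prior)] if prior else table[min(table)]

print(escalation_rate(45, fuel_escalation))  # 0.015, same as year 30
```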
Hope this explanation is clear and brief enough to be helpful so we can move forward together.
Keywords
Simple payback, Life cycle cost analysis, LCCA, Economics
References
Harvard Business Review: Net Present Value
BLCC 5
National Institute of Standards and Technology
U.S. Department of Energy
U.S. Department of Energy, Energy Information Agency (EIA)
U.S. Department of Energy, Federal Energy Management Program (FEMP)
To Model, or Not To Model
2020-07-26
Contributor
NML
Have you ever gotten a request for a model where the answer is just a no brainer, and you feel it’s a total waste of your time to make the model? I can’t count how many times this has happened over the last decade, so I thought I’d share some of those requests and how I responded. All the requests described below are anonymous to protect the guilty.
Before that, though: it's my personal belief as a consultant that I must be a good steward of my client's resources. Models can be very time-consuming, and when deployed wisely they are a good investment of time and money. However, sometimes people become overly dependent on them. These examples are meant to illustrate times when models were inappropriate.
Safety First: I had a request to determine the energy savings from adding insulation to existing steam pipes where the insulation had fallen off, been removed, or was otherwise missing in certain sections. It was probably about three meters (10 feet) of pipe. First and foremost, regardless of the energy savings, medium-pressure steam flowing through a pipe in excess of 180°C (360°F) is very hot, and if someone comes into contact with that pipe and is severely burned, the energy cost will be the least of the owner's concerns. Furthermore, the time it would take to make a model of a steam pipe (likely a spreadsheet or simple TRNSYS pipe model) and report results exceeds the annual energy cost of the steam heat loss many times over. If the payback of just the modeling is several years, it's probably not worth the modeling effort.
Bad Operations: This one should make any energy-conscious person cringe. A preheat coil valve in an air handling unit in an office building in Florida had become stuck and had been discharging 27°C (80°F) air for at least two years prior to this request. The owner wanted an energy model to determine the energy cost savings of fixing the problem. After politely declining to model this request, I offered this solution: first, shut off the valve to the preheat coil entirely. It's an office building with 10-20% outside air in Florida (CZ 2); you shouldn't need a preheat coil. After one winter of operation, if there have been no thermal comfort issues, remove the preheat coil and save the pressure drop on the fan for additional savings.
Shades that don't shade: Vertical exterior shades on due east and west orientations do little to reduce peak cooling loads or cooling energy consumption. This is probably the most common ineffective request I and many other modelers have received. The request is primarily one to justify an aesthetic decision with the hope of finding some cost savings. The paybacks are measured in multiple decades, yet we still see buildings with these shades being built today. At least I've never had a request to model exterior shades facing north. I have seen them included on buildings for aesthetic reasons, but never a model request, so I think there's still hope here.
I hope my readers can appreciate these examples and use them in practice to respond more rapidly to client requests, make the best of limited modeling funds, and make a positive impact on the overall environment.
Keywords
Consulting
References
Building Performance Documentation
2020-01-26
Contributor
NML
I've learned a lot about quality control over the last decade, and about the challenges of conveying energy model results into construction documents and ultimately a real building. In an ideal world there would be goals set, some modeling to decide the best approach, and the building would be built to achieve all the performance aspirations defined in those goals.
Unfortunately, that's not reality…
Good documentation begins with establishing clear goals in an owner's project requirements (OPR) document. The OPR should include parameters that inform what should be studied with the energy model, as well as targets for performance levels to be achieved, such as an energy use intensity (EUI), % savings goals, etc. It's important to be clear about these goals, as you'll need to refer back to them more than once to prevent them from being compromised.
A basis of design (BOD) is the next document; it establishes how the goals defined in the OPR are to be achieved. The BOD includes greater specificity on performance parameters such as equipment efficiencies (kW/ton, COP, kW/cfm, etc.). These values should be informed by energy model studies that identify the level of performance needed to achieve the goals identified in the OPR.
Here's where the first challenge to the OPR begins. The BOD is typically the first document with a sufficient level of detail for a cost estimate with believable numbers, and it's where the first round of value engineering (cost cutting) often happens. This is where nimble energy models are needed to respond rapidly to the cost estimate and identify where performance cuts can be made while minimizing the negative impacts on the goals established in the OPR. It should be noted that I have had lower cost options turn out to be more energy efficient as alternate technologies come along that are cheaper and more efficient, but unfortunately that's the exception.
As the design evolves into specifications and detailed drawings it’s important to keep an eye on all performance parameters so that nothing gets lost through the rest of the design process. Specifications and drawings should include all the details needed to build the building. It’s important for the energy modeler to be involved in specifications and drawing development at least peripherally. Something many experienced energy modelers take for granted is our in-depth knowledge of energy codes (ASHRAE 90.1, IECC) and ventilation codes (ASHRAE 62.1). These details aren’t always well translated by non-modelers.
The last, and possibly most important step in building performance documentation is reviewing product submittals as this is what’s being installed. Again, the in-depth knowledge of energy and ventilation codes is important here when reviewing detailed submittals. Often there are subtle details such as integrated part load values or equipment turndown that may or may not be caught by some reviewers.
Substitution requests are another instance where energy modeling tools need to be nimble. Typically, submittals need to be turned around in less than a week, sometimes in a day or two. The impact of a substitution request on the energy model can be positive or negative. Understanding the real pressures of design and construction, and how models need to respond, is critical in the development of tools so that we can all move forward together.
Keywords
Quality control, Documentation, OPR, BOD, Specifications, Submittals
References
ASHRAE 90.1
ASHRAE 62.1
International Energy Conservation Code (IECC)
Less Talk, More Development
2019-09-08
Contributor
NML
For the last year, NML's team has been exploring a variety of topics and technologies in order to chart a path forward. We have solved a number of technical challenges and are now on the long road to realizing our goals, many of which have been outlined in past blogs.
In order to focus our efforts on these exciting opportunities and all their technical and practical nuances, I have decided to reduce the frequency of my blog posts.
Once we have a functional product I’ll resume posting with an emphasis on technical solutions that help the whole AEC industry move forward.
So long for now.
Keywords
NML
References
There can be Only One Energy Code Compliance Path
2019-08-25
Contributor
NML
A point commonly misunderstood in energy code compliance, amongst both users and code reviewers, is that only one energy code compliance path may be used on each project.
The International Energy Conservation Code and ASHRAE 90.1 both allow project teams to pursue code compliance via multiple paths, which offers flexibility in meeting the energy code. For this blog I'm calling 90.1 a code because it's being used in that context: as a compliance path to meet the energy code.
Many jurisdictions adopt the IECC for code compliance. The IECC offers three compliance paths: prescriptive, performance, and ASHRAE 90.1. ASHRAE 90.1 in turn offers two or three compliance paths depending on the version. ASHRAE 90.1-2004 through 2013 offer a prescriptive path and an energy cost budget path for code compliance; Appendix G was not intended for code compliance until the 2016 version. ASHRAE 90.1-2016 offers three paths: prescriptive, energy cost budget, and the Appendix G performance rating method.
Within the prescriptive compliance paths there are numerous sub-paths for compliance of equipment such as chillers and lighting. These paths are solely contained within the prescriptive path and do not apply to the other paths.
What the codes do not allow is mixing and matching multiple compliance paths to demonstrate compliance. Each compliance path is a package, and deviating from that package will have unintended results, likely yielding a lesser-performing building than the authors of the code intended.
While similar, there are differences between the current versions of the IECC and ASHRAE 90.1 that do impact cost and efficiency per system. Previous versions of the IECC and 90.1 were more alike, so I believe people became accustomed to using them interchangeably. ASHRAE 90.1 tends to be a bit more aggressive on lighting efficiency and lighting controls than the IECC, whereas the IECC requires recovered heat for a portion of the domestic hot water load, among many other differences. Therefore, if the lighting design follows the IECC because it's easier to meet, while the domestic hot water system follows ASHRAE, then the building will use more energy than intended. Another difference among many between the IECC and 90.1 is fan power: the IECC regulates fan power under its mandatory requirements, whereas 90.1 regulates fan power under its prescriptive requirements. Therefore, if you cannot meet the fan power requirements due to project constraints, you may be forced down the 90.1 path with its various implications.
A strong understanding of the pros and cons of each energy compliance path is imperative these days, as it can have a major impact on the first and operating costs of the building. The recent codes have gotten so complex that it's no longer obvious what the right path forward is. I strongly recommend a long hard look at the compliance paths early in a project to determine the best path for the entire project. I can see energy code compliance bogging down projects as project teams weigh their options. Tool development is going to be of paramount importance, including automated baselines for energy models as well as the various sub-paths in prescriptive compliance, as there are so many ways to make mistakes in the current codes and standards. Successful navigation of the energy code will let us move forward together.
Keywords
Energy code, Compliance path
References
ASHRAE 90.1
International Energy Conservation Code (IECC)
ASHRAE Standard 209 in the Real World - Part 2
2019-08-11
Contributor
NML
Continuing the discussion of ASHRAE Standard 209 from the July 28, 2019 blog: I addressed cycles 1, 2, and 3 of Standard 209 there, and will address the balance of the standard here.
Cycle 4 generally works, but I don't see it following cycle 3; it will come before, during, and/or after cycle 3. Managing the relationship between cycles 3 and 4 requires a good working relationship with the architect. If cycle 4 goes first with HVAC system concepts, especially ones that have spatial limitations like radiant systems, then the architect needs to be aware that their decisions have impacts on HVAC: if the solar load is substantially increased, the efficient cooling solution may no longer work. Energy modelers need to be nimble with their load calculations and provide rapid feedback.
For the last six months I've been working on a truly fantastic façade and HVAC project that would certainly meet the spirit of cycles 3 and 4. We reduced the window-to-wall ratio, enhanced the glazing, added exterior shading, and then added radiant cooling in order to achieve code-minimum air change rates in patient rooms. The radiant panel sizing, glazing selection, and shading design were done iteratively and in parallel, not in sequence. Had we followed a sequence, the process would have failed, in my opinion, because the shading design had to be changed dramatically in DD because of a fabrication challenge, after changing frequently in SD because of owner preferences. We revisited the glazing and the radiant cooling design during DD and made up the difference from the smaller exterior shade until we had a design that worked.
Cycle 5 should really be cycles 3-5, with some of cycle 1 sprinkled in: the envelope parameters we didn't get to when we first attempted cycle 1. Integrated design, which is promoted in LEED v4 and other performance standards, is what all of this should be about. The tasks defined within the standard are good ones, but the order in which they are resolved is not going to be dictated by this standard; it will be dictated by the project team's needs.
Cycle 6, optimization, implies a parametric analysis to optimize the design. Optimized designs are easier said than done, depending on the desired parameters. I could see a modeler checking the box on this one by doing the optimization, but in reality the optimized solution may not exist. What if a lighting power density of 0.73 W/ft2 is optimal, but the products available give you 0.7 W/ft2 or 0.75 W/ft2? Similarly, you're not going to be able to buy a boiler with a performance curve optimized for your building. Cycle 6 seems like a rabbit hole that will likely yield minimal improvements, since the bulk of the savings was identified in cycles 3-5.
Informing value engineering in cycle 7 is common practice, as energy models are an effective cost management tool (see the June 16, 2019 blog "Energy Models Save Money"). This process can take place at any point in the design process, as budget realities can set in at the earliest design stages if the project is budgeted poorly enough. Code compliance is probably the biggest driver here: ultimately a project team can change its mind on meeting the goals defined in the OPR, but it has to meet the energy code to get a permit.
“As-Designed” models in cycle 8 are mandatory if you’re using the energy model for energy code compliance or as a submittal to a green building rating authority or utility incentive program.
Cycle 9, change orders, makes me cringe, because I have yet to meet a builder who keeps a close eye on this sort of thing in terms of energy. I do like that this provision is here as a means to keep a builder from changing something without telling the modeler, but I doubt it's going to stop the practice. This provision would need serious teeth, like withholding an occupancy permit, if it were ever to be followed broadly. It would be nice if the changes were at least recorded, so you have something to inform cycle 10 before it becomes a surprise.
"As-Built" models for cycle 10 really only occur for utility incentive programs, to validate that all the energy savings measures identified during the design phase were in fact installed during construction before an incentive is paid out. Some utilities use these programs while others do not. Also, some utility incentive programs use their own energy models for this phase, as the original energy modeler may no longer be engaged on the project, creating a disconnect. A measurement and verification project may also head down this path. Again, compliance here is hit or miss.
I’m going to leave cycle 11, “post occupancy models” for another blog on measurement and verification, calibrated models, etc.
To sum up, I get the intent of Standard 209, but its rigidity is ultimately a disservice to the modeler and the project. Compliance with 209 per section 4.2 is probably not that difficult for most projects that receive energy models, but many high-performance projects I've worked on wouldn't comply due to the sequencing of the requirements.
The consequence of this reality is that a project schedule just can’t and won’t be dictated by standard 209 as it’s in conflict with other design and construction considerations that supersede energy performance. Now more than ever, energy modelers need tools that can be quick and nimble so when the time comes to apply all the technologies desired on a project they can be tested quickly and inform design so the project keeps moving forward.
Keywords
Modeling process, modeling sequence
References
ASHRAE 209-2018
LEED v4 Building Design and Construction
ASHRAE Standard 209 in the Real World - Part 1
2019-07-28
Contributor
NML
This blog may ruffle a few feathers, especially among those folks who have bought into ASHRAE Standard 209, but I've violated pretty much every tenet of Standard 209 and still achieved LEED Platinum and net-zero-energy-ready buildings. The intent of Standard 209 is wonderful, but unfortunately it's not how buildings are designed in the real world.
A lot of the goals of 209 could be achieved by requiring a strong owner's project requirements (OPR) document that sets project-wide goals for envelope performance, thermal comfort, and glare control, among others, to ensure they are studied and hopefully implemented as defined in section 4. The real trick is having someone enforce the OPR throughout the project, which is easier said than done. Giving the OPR some teeth here is a nice touch, but enforcement is a challenge that owners holding the checkbook don't always seem capable of achieving.
An energy charrette is a great idea, but by and large the project team would like to see preliminary energy results so they have something to react to: hold the charrette not before model cycles 2 or 3, but right after them. A general sustainability charrette may work the other way when the whole project is a blank slate and goals are being set, but if there's a focus session on energy, there had better be some numbers to share. Stakeholders want to see the relative impacts of window-to-wall ratio, shading, HVAC system concepts, etc. at the start, so they can make informed decisions at the energy charrette, not multiple weeks later when they've moved on to other topics. There may still be a few options to explore after the charrette to address a new idea or two, but depending on the building type, a lot of the modeling for schematic design may be done at this point.
The immediate next step is about space planning and cost. Where is the MEP equipment going to be placed to serve the building effectively, what type of equipment will we have to achieve the building’s goals, and how much is that going to cost. So now we’re into HVAC and plant systems testing and using previously developed similar building models, benchmarks, or shoebox models to design because we may or may not have settled on a building design during orientation and massing or it’s happening in parallel. In many cases for new campus projects we’re designing the central plant before the buildings it serves begin design. Often it’s desirable that the central plant is operational sooner so it’s ready to provide heating and cooling to the connected buildings during construction fit-out. Here again load profiles are generated from other sources that do not exactly represent the buildings being served by that equipment.
Once space planning is complete and the HVAC system is set a handful more studies will be done on other major concepts such as air-side heat recovery that have substantial impacts on duct work and potentially shaft locations.
In the era of lean and integrated design, early concept phases and schematic design (SD) fly by in a matter of months, or in some cases weeks. A lot of “bolt on” concepts (technologies that can be tacked on later) that used to be tested in SD, such as lighting and lighting controls, are now saved for design development (DD) or even construction documents. Pretty much no one argues about lighting efficiency anymore. It will be LED, and we’ll see how efficient it is when we get to purchasing lamps, because the best lamps we can buy today will be worse than the ones we can buy after design is complete; the technology keeps improving.
I’m going to step through standard 209’s modeling cycles and comment on each as I go to maintain some semblance of structure in my comments to help clarify the points above.
Projects routinely violate cycle 1 because changing orientation is just not possible or there’s no interest in the thermal performance of the envelope yet. Emphasis on “yet,” as thermal performance is usually addressed in later design phases (see two paragraphs above). I agree with starting with orientation and massing studies for many buildings. In my experience, 1/3 of project teams take orientation and massing studies seriously, 1/3 don’t care, and 1/3 just don’t have a choice because of site constraints. Speaking with other engineers and modelers, I believe I’m lucky to have worked with that many project teams that care to do this step correctly. So, 209 definitely can have a positive impact on this point, but the first model cycle will be complied with on at most 1/3 of projects.
Cycle 2 is for projects where form is set prior to schematic design. I can’t think of more than a handful of projects I’ve worked on where the form of the building didn’t change during schematic design or later. I’ve seen buildings add a floor during construction due to program changes, and I’ve definitely seen entire floors or wings removed during CDs when accurate pricing becomes available. The building massing studies will be finished once the owner has determined how much square footage they can afford to purchase.
I generally agree with cycle 3, but the exception for process-intense buildings (i.e., data centers) still doesn’t make a lot of sense. Data center envelopes typically don’t include much in the way of glass or envelope loads; lighting is minimal, though I suppose you could state the lighting is better than code minimum; servers are dictated by the function and intensity of the data center, so internal loads are pretty much out; there’s very little ventilation air, though I suppose you could throw some air-side heat recovery on there; and passive strategies are a non-starter for a data center. Completing energy modeling to show a 0.5-1% savings is a waste of the modeler’s time. In a data center, the energy consumption outside the servers lies in the cooling plant, and that’s where the effort should be focused. Skip to cycle 4 if you’re a traditional steel or concrete box data center, in my opinion.
I’m going to pick up cycle 4 and the balance of standard 209 in the August 11, 2019 blog.
Keywords
Modeling process, modeling sequence
References
ASHRAE 209-2018
LEED v4 Building Design and Construction
Savings, Conservation, and Efficiency
2019-07-14
Contributor
NML
Maybe it’s only me, but I think terminology is pretty important to clearly define what you mean and convey information in a clear and concise manner. The term energy conservation measure (ECM) is one of the most misused terms in our industry. Conservation has become synonymous with efficiency, and there’s a significant difference: conservation requires a behavioral change or action, whereas an energy efficiency measure (EEM) is a technological choice.
For example, a person may conserve energy by changing a thermostat set point, dimming a light, or opening a window for natural ventilation. The corresponding efficiency measures include applying an automatic temperature schedule, dimming lights through daylight harvesting, or automatically opening a window with an automated window actuator. These subtleties are confusing to many people working in this field, so what chance do outsiders have when the experts can’t get it right half the time?
Some EEMs are straightforward, such as a more efficient boiler or chiller, but you can see how the lines blur, which causes confusion.
So why does this really matter, beyond this engineer thinking he’s right and feeling the need to make a point?
I think it’s very important for owners to know what they’re investing in for their building and what they’re in for over the long haul. Some decisions made during design can be felt for decades to come and are not easily corrected after a building is built. If the measure they’re investing in is an ECM, then they should know they’re on the hook for training the building staff and users, as well as training new people as staff turns over, in order for the measures to be effective long term.
Projects typically include a mixture of ECMs and EEMs, so for the folks who don’t want to get bogged down in semantics, I would recommend energy savings measure (ESM). Both EEMs and ECMs save energy, so ESM is an effective catch-all. Sure, ESM doesn’t roll off the tongue as well as ECM, but I think it’s a small price to pay for an accurate description of what we’re talking about, covering both efficiency and behavioral changes that reduce the operational cost and environmental impact of buildings.
And the grammar police will be happy!
Keywords
savings, efficiency, conservation, terminology
References
A script stole my job, but I am ok with it.
2019-06-30
Contributor
NML
I recently heard a concern that if we automate energy modeling, we are going to put energy modelers out of work. I’m not sure I could agree any less with that statement, but let me elaborate as to why I feel that we’re going to be well employed for years to come. We just might not be precisely doing tomorrow what we’re doing today.
Speaking for myself and many people I’ve spoken to over the years, energy modeling has become a total grind of clicks, scrolling, and lots of copy/pasting to fill out forms. And don’t forget a mountain of QC effort that is its own form of torture to find the one input out of thousands that may be throwing off the model results. At this point a script is better at completing these tasks because energy modeling by and large is a road well-traveled, and humans are error prone at repetitive tasks.
I remember several years back, a non-modeler who worked with me frequently asked, “why are there so many people with advanced degrees working on energy models?”. The simple answer is that ten years ago we didn’t know how to model many systems reliably, so it required a team of people with a strong background in building physics to diagnose the models and develop complex workarounds to get the desired results. By and large, these methods are established and/or the modeling tools have improved enough that those skill sets aren’t needed as much anymore. Consequently, you now have a lot of people with advanced degrees clicking, scrolling, and copying/pasting to fill out forms, as that is now a large fraction of the workload since much of the hard stuff is done. Needless to say, job satisfaction isn’t at an all-time high.
I and many other advanced energy modelers are branching out into different yet related fields such as CFD, daylighting, water modeling, code/LEED, occupant behavior, acoustics, etc. to differentiate ourselves and use our skills to advance the industry. These areas are fertile ground with a lot of groundbreaking work yet to be done, so this is what all those modelers should be working on, as it is not ready to be scripted just yet. And even when those items are ready to be scripted, we’ll find even more new work. There are so many emerging challenges that, again, I’m not concerned about future employment, but that’s a topic for a future blog or two.
Finally, we’ll never fully eliminate the position of building physicist/energy modeler. There’s always a new system configuration or component that has yet to be invented and will need to be modeled. And while the average knowledge level related to energy modeling has improved considerably, there will, at least for the near term, need to be someone who can interpret the results and determine if they are realistic and in the best interest of the project. Furthermore, energy codes and standards are always being updated, and their design impact needs to be investigated. Also, rating systems that address many sustainability topics, including building physics aspects, such as WELL and Fitwel, are becoming more ubiquitous and will push us for years to come.
Through intelligent scripting and automation, we can allow the industry to rapidly and cost effectively cover the well understood aspects of building energy modeling and instead focus our efforts on new challenges to move the entire industry forward together.
Keywords
scripting, automation
References
WELL-International WELL Building Institute
Fitwel-Center for Active Design
Performance Modeling Saves Money
2019-06-16
Contributor
NML
More than a few people have asked what I do, and then ask why. They’re not looking for the “to save the world” answer, but the more functional daily reason. Or, more specifically, they ask, “why do people pay you to do what you do?”
Unfortunately, the answer gets right into the weeds of building codes, but in a single sentence: “I use computational models to prove my building design meets the intent of the building code while not following all the suggested guidelines that may be more costly.”
A good example of this is that most current energy codes require air-side heat recovery, which typically takes the form of a total energy wheel [ASHRAE 90.1-2010, 2012 IECC]. A quick tangent here: energy code compliance using the performance path is based on demonstrating that the Proposed case (actual design) has a lower annual energy cost than the Baseline case (code model) to get a permit. What I’m going to demonstrate in the paragraphs below is that you can eliminate a total energy wheel priced at between $2-3/cfm with an energy model. For a 10,000 cfm air handling unit this could save between $20,000 and $30,000, which more than pays for the energy model.
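As a sanity check on that math, the avoided first cost scales linearly with airflow. A minimal sketch (the $2-3/cfm range comes from the text above; the function name and everything else are illustrative):

```python
# Back-of-the-envelope first-cost savings from omitting a total energy wheel.
# The $2-3/cfm unit-cost range is from the text; the rest is illustrative.
def wheel_first_cost_savings(supply_cfm, low_cost=2.0, high_cost=3.0):
    """Return the (low, high) avoided first cost in dollars for a wheel."""
    return supply_cfm * low_cost, supply_cfm * high_cost

low, high = wheel_first_cost_savings(10_000)  # 10,000 cfm air handling unit
print(f"Avoided first cost: ${low:,.0f} to ${high:,.0f}")
```

For the 10,000 cfm unit above, this prints a $20,000 to $30,000 range, matching the hand calculation.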
Back to the total energy wheel. Recent energy codes, ASHRAE 90.1-2010 and later, use total supply air flow and outside air fraction to determine whether your air handling unit is required to have air-side heat recovery. Just about every air handling unit I work on is about 5,000 cfm or larger and about 20-30% outside air. You’ll find these in many medium and large office buildings, clinics, hospitals, education facilities, etc. In return air systems, the return air mixes with the outside air to moderate the temperature prior to conditioning, in essence doing a lot of the work of the air-side heat recovery system.
A brief note on the physics of air-side heat recovery: these devices tend to be good at saving heating energy (large temperature difference), marginal at saving cooling energy (small temperature difference), and they consume additional fan energy to overcome the static pressure drop of the heat recovery device as the air passes through it. Heating systems typically consume natural gas, while cooling and fans typically consume electricity.
As stated, energy code compliance is based on energy cost. Many of these buildings consume electricity and natural gas as their primary utilities. For the last several years natural gas prices have been historically low, and consequently the heating saved through energy wheels has not been that valuable. Natural gas is also a traded commodity, so prices tend not to vary too much from location to location, say +/- 30% [EIA]. Electricity prices, on the other hand, can vary dramatically, from the low rates in the Pacific Northwest with its low-cost hydroelectric power to rates in many other states that are a factor of three or four higher [EIA]. Consequently, the relatively high fan energy consumption due to increased static pressure can offset the modest heating and cooling cost savings. Therefore, the Proposed case without air-side heat recovery can have a lower energy cost than the Baseline case with a wheel, saving both first cost (more than offsetting the modeling cost) and ongoing operating and maintenance costs.
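To make the tradeoff concrete, here is a toy annual cost comparison. Every rate and quantity below is a hypothetical illustration, not project data; the point is only that a modest gas savings can be wiped out by a year-round fan penalty at typical electric rates:

```python
# Illustrative annual cost tradeoff for an energy wheel.
# All rates and quantities are hypothetical, for demonstration only.
GAS_RATE = 0.60   # $/therm (historically low natural gas)
ELEC_RATE = 0.12  # $/kWh

def wheel_net_annual_savings(heating_therms_saved, cooling_kwh_saved, extra_fan_kwh):
    """Net annual utility-cost savings; negative means the wheel costs money to run."""
    savings = heating_therms_saved * GAS_RATE + cooling_kwh_saved * ELEC_RATE
    penalty = extra_fan_kwh * ELEC_RATE
    return savings - penalty

# Large heating savings, marginal cooling savings, year-round fan penalty:
net = wheel_net_annual_savings(heating_therms_saved=1500,
                               cooling_kwh_saved=2000,
                               extra_fan_kwh=12000)
print(f"Net annual savings: ${net:,.2f}")
```

With these invented numbers the result is negative: the fan penalty exceeds the heating and cooling savings, which is exactly the situation where the Proposed case without a wheel wins on energy cost.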
It’s important to do this analysis each time rather than relying on rules of thumb due to the quantity of variables at play, such as weather, usage profiles, utility rates, etc. Finally, the carbon analysis on this example is also mixed and does not lend itself to rules of thumb either. Some parts of the world have relatively clean electricity, while others do not [U.S. EPA eGRID].
By and large, owners aren’t looking to cut costs so much as to maximize their return on investment. So if I save a project money with a model, more than likely that saved money goes right back into the project.
Keywords
performance path, cost management
References
ASHRAE 90.1-2010
2012 International Energy Conservation Code
US DOE Energy Information Administration
US Environmental Protection Agency – eGRID
Annoyances become roadblocks with scale
2019-06-02
Contributor
NML
A paradox I’ve noticed in the energy modeling community is that large buildings tend to have the biggest budgets and can make the most use of modeling tools to inform design, but those are exactly the opposite of the types of models used to develop and test new tool enhancements. The consequence is that many tools struggle with scale.
I totally understand wanting to use a relatively simple model for development purposes, because a model that can handle 10 zones should be able to handle 100 zones, and so on. Also, who wants to wait around for the model to run to determine if the tool works? Finally, who is going to spend 2+ weeks creating that model, and what are they going to base it upon to make it realistic? The answer is that tool developers should be asking users for those models rather than creating them.
Last year a 200-zone EnergyPlus model was donated to an energy model development team to stress test one of their tools. The response we got from the developer was, “our tool choked on it.” For perspective, the largest EnergyPlus model I have seen was a 700,000 square foot (65,000 m2) hospital with more than 1,100 zones, so this 200-zone model was hardly the most cumbersome model ever developed. Also note that the 1,100-zone model was itself a simplification of the 4,000-zone real building it represented, so it wasn’t made unnecessarily complex.
While large buildings represent a relatively small fraction of buildings, they do represent a large fraction of energy consumption and total square footage [CBECS 2012]. These buildings have the most opportunity to improve their energy performance so logic would suggest that the energy modeling tools might cater to this user group if the intent is a global impact on energy performance.
A number of shortcomings observed in numerous tools working at scale include the following:
-Slow data entry methods,
-Lag between model changes and refreshes,
-User interfaces that can’t display all pertinent information, and
-Very long run times.
Various tools have taken differing approaches to resolve some of these issues, but all the tools I have tested have at least one of these shortcomings, which makes modeling slow and painful for users designing large buildings.
To summarize: as model sophistication, scale, and detail increase, computation and setup time increase. Little inefficiencies compound into big problems if you have to repeat them a thousand times. There are remedies, such as global data entry methods and databases, different approaches to software design, testing user interfaces, variable time steps and run periods, etc., that can resolve these challenges. Let’s apply these best practices so we can move forward together.
Keywords
Scale, model complexity, run time
References
US DOE 2012 Commercial Building Energy Consumption Survey
EnergyPlus
More on the Open-Source BPM Library
2019-05-19
Contributor
NML
I wanted to circle back on my brief 28 January 2019 blog on the Open-Source BPM Library to add some greater clarity. One of the primary reasons I didn’t stay in academia, despite some encouragement to do so, was that I didn’t like to see great ideas sitting on shelves. It frustrated me that millions of dollars of research funding went into experiments that were then documented in reports that few people will ever read.
It even got to the point that we’d joke about leaving a $20 bill in our dissertations in the library and returning a few years later to see if anyone had actually read them. I’ll pause here for a moment while people run to their local university library to check whether there are thousands of dollars buried in all those dissertations and theses... Hopefully you’re back and not angry with me that all you found was 25 post-it notes with a thank you and a smiley face because someone else got there before you.
Building upon the data schema blog (see the 21 April 2019 blog), I’d like to offer the following: what if students publishing in the realm of building performance modeling had to make their contributions compatible with a standard BPM schema? This way the entire modeling community could benefit from their work.
I had an intern a few years ago who, as part of his thesis, wrote an EnergyPlus component for a sewage heat recovery system. As far as I know, this code is sitting on a shelf somewhere in his advisor’s office. If we’re lucky, it’s uploaded to a website that I don’t have a URL for at the moment. What should have happened is that this code was uploaded to a common website where everyone could take advantage of it.
This practice has happened before. The University of Wisconsin - Madison Solar Energy Lab hosted TRNSYS components from its graduates on its website for years. These components were free to download and incorporate into TRNSYS.
A modern version of this same system would be ideal, with a few lessons learned from the SEL page. The process of incorporating a new TRNSYS component was a bit laborious in that you needed to copy multiple files into multiple locations in just the right way to get it to work. Also, you might need an updated DLL file to make it all work. Let’s just say it was nearly the opposite of user friendly, but it was better than having to write the component yourself.
With a common data schema and more cleverly designed data handling, it should be as easy as downloading an app on your smart device. This approach would allow the BPM community to leverage what’s being created and allow us all to move forward with unlocking all the knowledge that exists in our industry.
Keywords
sharing, data schema
References
EnergyPlus
TRNSYS
University of Wisconsin - Madison Solar Energy Lab
There is a standard for that?
2019-05-05
Contributor
NML
I on-boarded some new staff recently (fresh out of school), and I got a “deer in the headlights” look when I started talking about all the standards and guidelines out there that they would have to familiarize themselves with over the next year to do their jobs.
In my opinion, to be a competent energy modeler and consultant these days you have to have at least a passing familiarity with thousands of pages of standards. I know I touched on some of this in the 10 February 2019 blog, but it bears repeating. It’s probably too much to expect to be familiar with all the content, but if you add ASHRAE 90.1-2016 [388 pages], 62.1-2016 [60 pages], and 55-2013 [58 pages] together you get 506 pages of content for the current standards, let alone the previous versions. Pile on LEED v4 BD+C [817 pages] and you are well past 1,000 pages of content to sift through for current projects. Sure, not all of LEED is applicable to energy modelers, but many of us act as LEED admins as well, so yes, I have looked at the entire LEED reference guide. Lest we forget, LEED v2009 [674 pages] references 90.1-2007 [288 pages], and LEED v4 references 90.1-2010... you get the idea. And those of us who work on specialty building types like labs or hospitals have additional standards and guidelines to follow. There is just too much content for anyone to absorb anymore.
I feel lucky that I entered professional practice in 2009, when there was about half as much content as there is now, so I have had time to absorb it all. Joining the industry today must be overwhelming for new people, with the sheer volume of content already available and new content being added every day.
In another meeting recently, an experienced ASHRAE member was talking about ventilation and noted that he had never heard of Standard 161 when it was referenced in a document. I had to look it up too and found out that it’s specific to aircraft ventilation. While I don’t work on aircraft, I do consider myself very competent on the topic of ventilation and thought I should have known the standard number by now. I started looking for a complete list of ASHRAE standards and found more that I had never heard of. After doing this job for more than 10 years, I have to say I was a bit disappointed in myself that I was not aware some of this information even existed. There is a lot of information buried in a lot of places that’s just too hard to find, even with wonderful search algorithms at our disposal.
But is it really practical to know it all anymore? Rather than sifting through the dozens and dozens of standards and guidelines available to the AEC industry to pick out the pieces of information applicable to your project, it would make more sense for the modeling tools to have the standards embedded within their libraries. That way the information is flagged and users can’t miss it. Furthermore, as the tools receive updates, addenda and errata could be included. Addenda to standards are published all the time as committees complete their work, so unless you’re closely following all the standards that pertain to your work, you may miss something critical or useful.
Embedding standards and guidelines into tools would also alleviate some pressure on energy modelers, who need to know the energy modeling standards in addition to the design standards. For example, there should be enough intelligence in the tool to identify that a modeler entered insufficient ventilation into a space per ASHRAE 62.1, or whichever standard is selected, and then provide a link to the specific portion of the standard that is applicable.
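As a sketch of what such an embedded check could look like, a tool might compare modeled outdoor air against a 62.1-style two-part minimum (a people component plus an area component). The rates and function names below are illustrative placeholders, not values pulled from the standard’s tables:

```python
# Hypothetical in-tool check against an embedded ventilation standard.
# The 62.1-style rates below (cfm/person, cfm/ft2) are illustrative defaults.
def min_ventilation_cfm(people, area_ft2, rate_per_person=5.0, rate_per_ft2=0.06):
    """Breathing-zone minimum outdoor air using a two-part (people + area) rate."""
    return people * rate_per_person + area_ft2 * rate_per_ft2

def check_space(name, modeled_oa_cfm, people, area_ft2):
    """Return a warning string if the space is under-ventilated, else None."""
    required = min_ventilation_cfm(people, area_ft2)
    if modeled_oa_cfm < required:
        return f"{name}: modeled {modeled_oa_cfm} cfm < required {required:.0f} cfm"
    return None

print(check_space("Office 101", modeled_oa_cfm=60, people=10, area_ft2=500))
```

A real implementation would pull the rates from the selected standard’s occupancy-category tables and link the warning back to the relevant section, but the flag-and-explain pattern is the point.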
Finally, the icing on the cake would be if the standards-setting bodies issued their updates within a common data schema (see the 21 April 2019 blog) so that all developers could simply update their software, rather than leaving the interpretation of the standards up to the tool developers. Rapidly and uniformly accessing standards content in a meaningful and targeted way would allow the entire industry to move forward in a big way.
Keywords
Standards, learning curve, data schema
References
ASHRAE 55-2013
ASHRAE 62.1-2016
ASHRAE 90.1-2016
ASHRAE 90.1-2010
ASHRAE 90.1-2007
ASHRAE 161-2018
LEED v2009 BD+C
LEED v4 BD+C
Excuse me, but I do not speak your language
2019-04-21
Contributor
NML
I fondly remember a recent BPACS conference where there appeared to be a groundswell of discontent over the fact that energy modeling had devolved into a job where you copy and paste data between tools and then paste the results into a form or a report. I’m sure that function is not 50% of our time, but it just feels like it since it’s so tedious.
It is rare for me today to start and finish an energy model in a single tool. Different tools have different specialties, so I, and much of the rest of the energy modeling community, have become fairly adept at splicing together results to create the desired finished product.
While it may be a long time before disparate tools can communicate with one another, it would be great if the copying and pasting could be more streamlined. One of the first steps of moving data from one tool to another is converting units. Different tools use different default units, and only some of them let you adjust these units within the tool. Many also have different naming conventions and sign conventions for the same value.
Ultimately, what’s needed is a tool-agnostic data sharing schema that can act as an intermediary between all of the different tools. That way, I can send my time series Btu/hr and Fahrenheit values from eQUEST into my TRNSYS models that default to kJ/hr and Celsius, and get the results back into eQUEST in MMBtu and Fahrenheit.
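A small slice of what such a schema would standardize is unit conversion. A minimal sketch of the eQUEST/TRNSYS round trip described above (the constants and function names are my own, not from any existing schema):

```python
# Minimal unit-conversion sketch for passing time-series data between tools
# that default to Btu/hr and Fahrenheit (eQUEST) versus kJ/hr and Celsius (TRNSYS).
KJ_PER_BTU = 1.055056  # 1 Btu is approximately 1.055056 kJ

def btu_hr_to_kj_hr(q):
    return q * KJ_PER_BTU

def kj_hr_to_btu_hr(q):
    return q / KJ_PER_BTU

def f_to_c(t):
    return (t - 32.0) * 5.0 / 9.0

def c_to_f(t):
    return t * 9.0 / 5.0 + 32.0

def btu_to_mmbtu(q):
    return q / 1e6  # 1 MMBtu = 1,000,000 Btu

# Round trip: eQUEST -> TRNSYS -> eQUEST
assert abs(kj_hr_to_btu_hr(btu_hr_to_kj_hr(1234.5)) - 1234.5) < 1e-9
assert f_to_c(212.0) == 100.0 and c_to_f(0.0) == 32.0
```

The conversions themselves are trivial; the value of a shared schema is that every tool would agree on the names, signs, and units so nobody has to hand-code this glue per project.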
This data sharing schema would enable the entire industry to skip a tedious and potentially error-prone step that annoys many users. More streamlined data sharing for everything from climate tools to water models will help move the industry forward.
Keywords
Tool agnostic data sharing, data schema
References
Never Ask a Spreadsheet to do an Energy Modeler’s Job
2019-04-07
Contributor
NML
Recently an engineer approached me wanting to complete an hourly chiller plant optimization for a client in a spreadsheet. It was estimated that the spreadsheet model would need 16 hours of run time and even then it would not include everything that an energy model would include in terms of performance curves, weather data, etc.
I had to ask myself, “why would anyone do this?” The simple answer is that the engineer was not an energy modeler. What really bugged me is that I learned how to do this exercise in TRNSYS in 2009, using a genetic optimization algorithm, in the training class. Yet here we are, ten years later, and a mainstream engineer has not moved on from spreadsheets.
Fortunately, it occurred to the engineer to ask for help, but not before investing numerous hours in the near-futile effort of creating what would probably have become a monster spreadsheet: a spreadsheet so cumbersome it would have been impossible to debug had it not worked out. Further, it would likely have been so complex that sharing it for reuse would have been tricky at best.
To be clear, I love spreadsheets. They are a great place to complete simple calculations and prototype more complex tools. Most spreadsheet functions are intuitive, hence why so many people use them for thousands of purposes. However, because spreadsheets are so intuitive, people often go overboard and make massive tools that are impossible to debug and alter. For example, what if the creator of the chiller plant optimization spreadsheet wanted to add a thermal storage tank to the tool? The complexity of the tool may not allow it, to say nothing of having someone else make an addition to the spreadsheet.
Conversely, energy modeling is not intuitive and is just too inaccessible for many people. Clearly there is a market need for the modeling, but training for a one-time or infrequent use is wasteful. To do something as sophisticated as a chiller plant optimization, you are either an energy modeler or you are not. That is a terrible position to find ourselves in if we want to make energy modeling more impactful as an industry.
We asked ourselves if we should create a model and then teach the engineer how to use it so they could make minor modifications. However, that seemed impractical because there are just too many ways of breaking these complex energy models.
What energy modelers need is a means of creating quality canned models that can be partially locked down, much like spreadsheets where individual cells, or in this case inputs, can be locked. This should sound familiar to the TRNSYS community; it’s called a TRNSED model, but as far as I know the concept is limited to TRNSYS. TRNSYS has its own baggage that I don’t want to get into right now, but the TRNSED concept is sound.
In my experience, all but the most simplistic energy modeling tools, such as System Advisor Model or RETScreen, require training, and even then it is not a slam dunk. Most modeling tools are not designed to be intuitive, or at least most are not tested for that. If we want to move forward as an industry, we are going to have to expand the user base and make energy modeling accessible to a wider audience.
Keywords
Accessibility, intuitiveness, learning curve
References
TRNSYS http://www.trnsys.com/
SAM https://sam.nrel.gov/
RETScreen https://www.nrcan.gc.ca/energy/software-tools/7465
Uncertainty 101
2019-03-24
Contributor
NML
I would like to go more into the fundamentals of energy modeling so we are all on the same page. First and foremost, energy models are used to compare design options to help inform design decisions: A is better than B, but not as good as C. In general, the tools do this fairly well for most energy savings measures.
We start running into issues when we look for absolute values between two options and, even worse, predictions of future performance such as a utility budget for a new building. Energy models are built upon physics-based algorithms that rely on a lot of assumptions around weather, occupancy, and usage schedules.
None of this stuff is rocket science, but there are literally hundreds of variables to keep track of that move in different directions, and therefore energy modeling is by its very nature uncertain. That does not mean we should not use energy models; it means the results need to be used appropriately.
All of these assumptions have margins for error within them that can radically change the results. The easiest example is occupancy schedules. Let’s say you have a hotel with a typical occupancy rate of 60% for the year. You install some energy savings measures that set back lights and equipment when the building is unoccupied. Your local tourism bureau picks up a lot of conferences, and all of a sudden your hotel occupancy is up to 80% for the year and your utility bill is much higher because of all the additional building occupants using lights, heating and cooling, and domestic hot water. At the end of the year the building owner may or may not make the connection between occupancy rate and utility consumption, and therefore might take you to task for not achieving the utility bill savings you promised. Similar things happen with the weather, where annual heating and cooling can fluctuate by 30% or more in a cold or hot year versus the typical year used in energy model simulations.
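To put rough numbers on the hotel example, a toy model that splits the bill into an occupancy-independent piece and an occupancy-driven piece shows how a 60% to 80% swing moves the total. Both load figures are invented for illustration:

```python
# Toy sensitivity of a hotel's annual electricity use to occupancy rate.
# Both load figures below are invented, for illustration only.
BASE_KWH = 100_000           # kWh/yr independent of guests (common areas, plant)
GUEST_KWH_AT_FULL = 150_000  # kWh/yr at 100% occupancy (lights, HVAC, DHW)

def annual_kwh(occupancy_rate):
    return BASE_KWH + GUEST_KWH_AT_FULL * occupancy_rate

modeled = annual_kwh(0.60)  # occupancy assumed in the energy model
actual = annual_kwh(0.80)   # conference-driven reality
print(f"Utility use is {actual / modeled - 1:.0%} above the modeled year")
```

Even in this trivial model, a 20-point occupancy swing moves the bill by double digits, before the owner ever looks at the savings measures themselves.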
What modelers need is a simple way to do sensitivity analyses: run dozens of simulations around each energy savings measure, testing the impact of extreme weather, schedule changes, and other factors, and calculate a confidence interval.
There are Monte Carlo simulations and the like that ESCOs frequently use on energy savings measures such as lighting retrofits, but these are not generally integrated into something as complex as a whole building energy model, as it would be computationally intense for so many variables. National labs do these types of tasks on supercomputers, but that is not a resource available to most people on a building project, so we need something simpler. Maybe one day we will get to the point computationally where we can run 20 to 30 energy savings measures on a large building with 10,000 simulations per savings measure to get an optimized value, but for now I would like to see something as simple as a chart with an error bar that does not take an extra day or two of running simulations manually to create.
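A minimal sketch of what such a sensitivity run could look like. The simulation function here is a hypothetical stand-in for a real whole-building model, and the base load, occupancy response, and weather spread are all illustrative assumptions:

```python
import random
import statistics

def simulate_annual_kwh(occupancy, weather_factor):
    """Hypothetical stand-in for a whole-building simulation: returns
    annual energy use (kWh) for a given occupancy rate and a weather
    severity multiplier. A real workflow would call the modeling tool."""
    base = 500_000  # assumed kWh at 60% occupancy, typical weather
    return base * (0.6 + 0.67 * occupancy) * weather_factor

random.seed(42)  # repeatable draws for this sketch
results = []
for _ in range(1000):
    occupancy = random.uniform(0.5, 0.9)  # the 60%-vs-80% hotel problem
    weather = random.gauss(1.0, 0.1)      # ~+/-30% annual swings near 3 sigma
    results.append(simulate_annual_kwh(occupancy, weather))

mean = statistics.mean(results)
stdev = statistics.stdev(results)
# 95% confidence band: exactly the error bar to put on the chart
low, high = mean - 1.96 * stdev, mean + 1.96 * stdev
print(f"annual use: {mean:,.0f} kWh (95% band {low:,.0f} to {high:,.0f})")
```

The point is not the toy physics; it is that once the simulation is callable from a script, the confidence interval is a dozen lines of post-processing rather than a day of manual runs.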
Some readers may say to themselves: have you heard of parametric tools? Many software packages have had parametric functions built in for years. However, many of these parametric tools work best with simpler inputs such as lighting power density or glazing types. Depending on your needs that might be sufficient. More complex changes, such as changing HVAC system types, are out of reach for most modeling tools and have to be completed manually. So the start is there, but we have a ways to go to make parametrics robust for all types of energy savings measures.
Finally, the ability to generate data and useful chart(s) is something that all modelers need. Let us take this one step further and create a robust parametric tool that can generate data, display it in a meaningful and intuitive way and reduce model uncertainty so we can move forward together.
Keywords
Uncertainty, parametrics, error
References
None this time, just my thoughts.
The Hunt for the Right Information
2019-03-10
Contributor
NML
More times than I can count, people have forgotten what I needed for my model, provided the wrong information, or I have forgotten that what I asked for was already provided. By and large I think this is because people are overworked and receive information from too many sources to keep track of it all. I myself often remember that I received something but just do not remember by what medium (text, email [which account?], instant message [which account again?], voicemail, etc.).
Some firms send out multi-page forms or questionnaires to get at the right information. Unfortunately, no one wants to fill them out, and no single person may be able to, as energy models are assembled from many people's input. When the model deadline is upon you and the information just has not been made available, modelers make something up, as bad as that sounds. Referencing back to the 2019-02-24 Benchmarks blog, this is when having a good database of inputs really comes in handy.
Energy models are the culmination of the entire project design and therefore are not done until the last piece of information is entered. The information comes from a variety of people: architects, MEP engineers, landscape architects, civil engineers, etc. Many of these people may have never opened an energy modeling software package and, depending on the modelers they have worked with in the past, may never have had the desired information requested of them. You could be working with someone who literally has never had to work with an energy modeler before, even in 2019.
I have found that it helps to sit in the same room as the folks who are unfamiliar with the modeling in order to explain what precisely is needed. Unfortunately that is not always practical. With more connectivity than ever before, working out of state without traveling to that state is becoming quite common. Borrowing a page from the Revit® playbook, why not have everyone log in to a shared model? Today it is not uncommon to have remote design coordination meetings, and it will only become more common with the advent of virtual and augmented reality.
Permissions could be set up in the energy model so that, for example, an architect can see/edit envelope parameters only, if that is desired. This not only prevents people from editing portions of the model they should not, but it streamlines the process. A key problem with energy models is that there are thousands of potential inputs. An architect does not want to get bogged down with fan performance curves and air-side heat recovery any more than an electrical engineer wants to get bogged down with domestic hot water consumption and solar heat gain coefficients. Tailoring the relevant inputs to whoever you are working with will speed up the process, and a customizable experience is generally appreciated by most people.
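As a rough sketch, the permission idea could be as simple as a role-to-input mapping in the shared model; the role names and parameter names here are hypothetical placeholders:

```python
# Hypothetical role-to-input mapping for a shared energy model.
# A real tool would load this from project configuration.
ROLE_INPUTS = {
    "architect": {"wall_u_value", "glazing_shgc", "window_wall_ratio"},
    "mechanical": {"fan_curve", "chiller_cop", "heat_recovery_eff"},
    "electrical": {"lighting_w_per_m2", "plug_load_w_per_m2"},
}

def visible_inputs(role, model_inputs):
    """Return only the model inputs a given role should see and edit."""
    allowed = ROLE_INPUTS.get(role, set())
    return {k: v for k, v in model_inputs.items() if k in allowed}

model = {
    "wall_u_value": 0.35,
    "fan_curve": "curve_07",
    "lighting_w_per_m2": 8.0,
}
print(visible_inputs("architect", model))  # only the envelope parameter
```

The same mapping doubles as an edit guard: reject any write to a parameter outside the caller's allowed set.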
Each input and output could have a chat window and tracking log of what was done so there is a digital paper trail of how the input was specified, and what still needs to be done. This way, rather than having a dozen streams of information it could all be in one place. Finally, a dashboard of sorts that shows what parts of the model are complete would be helpful to keep track of what is done and what is still needed.
This way there is a central reference point to collect and disseminate information related to the energy model so we can all move forward together.
Keywords
Data Gathering, benchmarks, data sharing, collaboration
References
AutoDesk: Revit®
Benchmarks, Benchmarks, Everywhere
2019-02-24
Contributor
NML
Building energy benchmarks have many functions, including: energy model quality control, comparisons across energy models, comparison of modeled results to real performance, and a means of judging your building's performance against your peers as well as others.
There are numerous sources of energy benchmark data, including public (national and local) and private (free and paid) repositories across numerous building types. The types of data available vary across the repositories, but, possibly most importantly, the measurement methods, model inputs, and boundary conditions can vary as well. These differences across datasets leave users with some level of uncertainty in comparing results. Needless to say, the fragmented nature of benchmarking should be addressed through either a consolidation or at least a translation of the databases so users are not left wondering whether the data is consistent.
Digging into the energy modeler's point of view, the granularity of benchmark data has value. For example, if whole building energy use intensity data (kWh/m2-year or kBtu/ft2-year) is available, the modeler can get a sense of how their whole building result compares to the benchmarks. However, if their values are high or low, the modeler is left to their own devices to determine what to change in the model to head in the right direction. Even if the total number is correct, end uses may be inconsistent.
The next level of granularity is energy consumption broken down by end-use category (lights, fans, pumps, heating, etc.). The modeler can then identify which specific energy type(s) are the source of the problem, and can then investigate what the problem might be. This is much better than the whole building energy level, but still leaves modelers with an open ended search with many possible inputs to review.
I recall a group of students in grad school making energy models in eQUEST, EnergyPlus, and TRNSYS of our Solar Decathlon house and comparing it to the measured data. The whole building values were pretty similar, but the end-use break downs were all substantially different from each other and from the metered data.
The most granular level of benchmarking that modelers need is input level values. Experienced modelers know what reasonable inputs are for envelope, lighting, mechanical systems, etc. Default values are often dated, and code required values are often out of date as well or difficult to achieve. For example, ASHRAE 90.1-2016 reduced lighting power densities to be in line with LED lighting as that had become mainstream. However, efficient LED lighting was being used in projects more than three years prior to 2016.
I recall a conference presentation made by Pam Berkeley of UC Berkeley (no relation as far as I know), where she had 12 modelers attempt to model the same building with the same data. One would hope that all 12 model results were similar, but unfortunately that was not the case. That result should be very troubling to everyone in the energy modeling industry concerned about our credibility.
There are thousands of inputs that can go into an energy model that can skew the results. A benchmarking system that is integrated with the energy modeling tool could use room or zone level defined lighting values and share those anonymously to see where the industry is trending. Furthermore, the zone level input feedback could flag values for users to identify the root cause of their misaligned energy values. Finally, energy modeling tools should be fully integrated with benchmarks for outputs and should enable seamless cross references across public and private databases.
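A sketch of that anonymous flagging idea, assuming a pool of shared zone-level lighting power densities collected by the tool (the sample values and the three-sigma threshold are illustrative):

```python
import statistics

# Hypothetical anonymized zone-level lighting power densities (W/ft2)
# contributed by other users of the integrated tool.
industry_lpd = [0.8, 0.9, 1.0, 1.1, 1.2, 0.95, 1.05, 1.3, 0.85, 1.15]

def flag_outlier(value, samples, z_limit=3.0):
    """Flag a user's input if it falls far outside the industry trend,
    using a simple z-score against the shared sample."""
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return abs(value - mean) / stdev > z_limit

print(flag_outlier(1.0, industry_lpd))  # typical value -> False
print(flag_outlier(5.0, industry_lpd))  # suspicious value -> True
```

A real implementation would want a larger sample and a segmentation by space type, but even this crude check points the modeler straight at the misaligned input instead of leaving an open-ended search.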
Applying benchmarks as a means of quality control has become very common for most firms. However, there is a lot of room for improvement to move benchmarking forward together.
Keywords
Benchmarking
References
ASHRAE 90.1-2016
Berkeley, P.; Haves, P.; Kolderup, E. Impact of Modeler Decisions on Simulation Results. 2014 ASHRAE/IBPSA-USA Building Simulation Conference. Atlanta, GA. September 10-12, 2014.
Transparency
2019-02-10
Contributor
NML
Distrust in the energy modeling community has been around ever since it went mainstream more than ten years ago. I have seen criticisms from people who just do not like the results the modeling is showing them all the way to litigation against the USGBC about buildings failing to perform. Green building rating systems and standards are far from perfect, but they are not malicious. They are in many ways obtuse and confusing to the point that people make honest mistakes and buildings may not perform as advertised. So why the distrust and in some cases hostility towards the energy modeling community that makes up a big part of the green building community? My guess is it is a lack of transparency.
I have been doing a bit of commissioning work around the country lately. In multiple cases I have been assisting other engineering firms with energy code compliance and their lack of knowledge about 90.1 is disturbing to me. In two separate cases 90.1-2007 compliance was questionable and that standard is dated to say the least. All the firms I am working with have good reputations and are what I would consider competent from a design perspective. So why is the energy code and items as simple as a fan power allowance so confusing to so many engineers across a large swath of the industry? I have seen the same issue with architects on envelope compliance, but that is a topic for another day.
At roughly the same time I began reviewing ASHRAE 90.1-2016 for a new project where it was adopted as code by the authority having jurisdiction. As a career energy modeler and consultant I found the new standard useful in clarifying several points (air infiltration and snow melt controls, for example), an overall incremental improvement over previous versions. However, when I step back and look at a standard that is now 388 pages long (90.1-2004 was 186 pages) and has multiple compliance paths, I feel that people who are new to this industry or casual users of the standard will be confused and/or intimidated by it. To say nothing of how this standard has to interact with the myriad of other codes and standards that govern our industry. Again, I am not saying the more than doubling of content is not worthwhile, but maybe a different means of interacting with the standard is necessary.
I think back ten years to when I started: I had to understand a few hundred pages of standards, relatively simple mechanical systems, a relatively simple green building rating system, and the handful of energy modeling tools available at the time. Now the expectations are higher and the learning curve has grown by a factor of three or four in every category I just identified. However, the tools we have to climb this much steeper and more challenging mountain have not kept up at all. So much so that people have given up, including the code officials in some instances. In 2018 I was told on three different projects that the code officials did not want an energy report because they just did not want to deal with it. If it is not COMcheck, they are not interested. It does not mean we did not get a permit (the projects did comply); there just was not anyone to review it. Maybe the code reviewers do not have the training to review it; maybe they receive so many garbage energy models that they just do not want to bother reviewing them. The reason does not matter so much. What matters is what happens when people just say we do not need to demonstrate compliance anymore.
As a person who generates these reports as a notable fraction of my work, I get nervous about my livelihood if no one wants the report anymore. What happens when energy codes are no longer enforced because they are simply too confusing to users? I think a lot of the blame lies with the tools we use in our industry to demonstrate code compliance. Users should not have to read the entire code; the tools should know the code, provide guidance on what is wrong, and then point to sections in the code as needed so users know how to fix it. No one, except maybe me and a handful of other code nerds, reads a standard cover to cover anymore. What users want is to be told whether they are falling short or not, and this should be very easy for a tool to understand and clearly document. An energy model should have a report that is as clear and robust as a COMcheck report, so that the code reviewer does not care what the source of the compliance report is as long as it is reputable and says pass or fail.
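A toy version of what a rules-aware tool could do: check each design value against a limit, report pass or fail, and point back to the governing section. The limits and section numbers below are placeholders, not actual 90.1 values:

```python
# Illustrative rules table: parameter -> limit and (hypothetical) code section.
# A real tool would encode the actual 90.1 tables and exceptions.
RULES = {
    "lighting_w_per_ft2": {"max": 0.9, "section": "9.5.1"},
    "fan_w_per_cfm":      {"max": 1.1, "section": "6.5.3.1"},
}

def check_compliance(design):
    """Return a pass/fail row per rule, with a pointer to the code section."""
    report = []
    for param, rule in RULES.items():
        value = design.get(param)
        ok = value is not None and value <= rule["max"]
        report.append({
            "parameter": param,
            "value": value,
            "limit": rule["max"],
            "result": "pass" if ok else "fail",
            "see": f"90.1 Section {rule['section']}",
        })
    return report

for row in check_compliance({"lighting_w_per_ft2": 0.8, "fan_w_per_cfm": 1.3}):
    print(row["parameter"], row["result"], "->", row["see"])
```

The user never reads the standard cover to cover; the failing row tells them what is wrong and where to look.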
Code compliance is such a fundamental part of our work that it should be an automated slam dunk not just for the energy modelers, but for everyone involved in delivering and reviewing the results. Let us move code compliance forward together.
Keywords
Energy code compliance, transparency
References
ASHRAE 90.1-2004
ASHRAE 90.1-2007
ASHRAE 90.1-2016
Open-source BPM Library
2019-01-27
Contributor
NML
Have you ever felt you have reinvented the wheel more than a few times? I recall one of my fellow TRNSYS users trying to model an HVAC system serving a pool with an integrated DX heat recovery system for a client. It took him several weeks of trying out control logic and system configurations before he felt it was good enough to share. I remember him saying there is probably an engineer sitting in a cubicle at the manufacturer who has the bit of information he needed to make the model work. This thought has probably crossed most energy modelers' minds in the search for performance curves and the like.
Fortunately, some of us who have been at this for a while have made connections throughout the industry and have collected these key bits of information. Unfortunately, sharing these bits of knowledge is not always so easy. New modelers especially are left with limited options and a steep learning curve, often not even knowing which questions to ask. User forums, databases, websites, and publications have scattered this information across multiple locations, some more accessible than others. New modelers, who need the most guidance, might not even know where to look for the information.
A better way of sharing knowledge, and indeed whole models, needs to be created so that modelers can spend more time modeling and less time searching for information or reinventing the wheel. Let's move forward together.
Keywords
Building performance modeling, knowledge sharing
References
TRNSYS
What Practitioners Need
2019-01-14
Contributor
NML
Energy model practitioners have a long list of needs, and maybe that's part of the problem. Software developers hear complaints from around the world and are left wondering what the most important thing to improve upon next is. Invariably, many users feel they are not heard if their need is not filled in the next version. The national labs seem primarily focused on developing software for themselves that is then made available to the public, which may or may not be useful.
My first blog, titled The Current State of Affairs, focused on speed of model delivery, and that is going to continue as a theme for a while as it is the key symptom practitioners face on a daily basis. Modeling tools developed by national labs and universities tend not to account for the practitioner's need for speed. Models I created in grad school that took months to complete now ideally need to be completed in days. For very complex models we can still buy some time, but I cannot recall a recent time when energy model results were not desired sooner than they were available. So let's break down the root cause of the speed issues and what better software might be able to solve, as technology is not the answer to everything, but it definitely plays a significant role.
Data collection is the first big challenge, and often leaves the modeler making assumptions that, unfortunately, they will be held accountable for later in the project if proven wrong. Short of having years of experience to fill in these assumptions quickly and accurately, new modelers are often left agonizing about which assumptions are realistic. Even experienced modelers know that certain inputs can have an enormous range of values, which again can lead to challenges down the road. If you have to push through this phase quickly because you need results tomorrow, would it not be great to have a centralized repository of benchmarks so you are not left to your own devices?
Data entry can be extraordinarily time consuming depending on the level of detail or scale of the project. Many modelers make simplifications to streamline the process, but that can come back to bite you if you have simplified a detail that someone asks about at a later date. You never know what that question is going to be, so once again you are left agonizing over what to do, which consumes time. Flexible geometry is available in some tools but not all, so once you have set the geometry you are committed to a path. Facade parameters, lights, equipment, occupants, schedules, etc. all require some thought. There are usually default values in each software package, but some of those defaults may be out of date, especially lighting. Again, would it not be nice to have a centralized repository of benchmarks that you could rapidly deploy so you are not slowly entering inaccurate data?
Quality control and unmet load hours can be what takes the most time in delivering a model. After more than a decade of working in the energy model realm I can count on one hand the number of models I have created that did not have some sort of issue that needed to be diagnosed, which again takes time. Sometimes it is a data entry error, such as having 15 W/ft2 rather than 1.5 W/ft2 in 75 patient rooms and not noticing. The equipment just seemed high, but what did I know; it was only my second healthcare project, so maybe a 45% plug load was realistic. Having good benchmarks would have flagged this as an input outside the normal range of 1.0 to 3.0 W/ft2. Unmet load hours are sometimes easy to fix and sometimes take weeks if the root cause of the unmet hour is not known. I used eQUEST for a few years before noticing that coils with a defined discharge air temperature did not always autosize correctly. I learned that sometimes coil capacities need to be hard coded in order to get the right discharge air temperature. It would have been great to have good diagnostic tools that showed where the problem lies, and better yet software that did not have intermittent bugs; I might have had ten similar air handling units and only one or two would have the issue.
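That kind of range check is trivial to automate. A sketch using the 1.0 to 3.0 W/ft2 rule of thumb from above, which would have caught the 15 versus 1.5 typo at data entry:

```python
def check_plug_load(w_per_ft2, low=1.0, high=3.0):
    """Flag plug-load inputs outside a typical range.
    The 1.0-3.0 W/ft2 band is the rule of thumb quoted in the post;
    a real tool would pull the band from a benchmark database by space type."""
    if w_per_ft2 < low:
        return f"WARNING: {w_per_ft2} W/ft2 is below the typical range ({low}-{high})"
    if w_per_ft2 > high:
        return f"WARNING: {w_per_ft2} W/ft2 is above the typical range ({low}-{high})"
    return "OK"

print(check_plug_load(1.5))   # prints "OK"
print(check_plug_load(15.0))  # warns: the patient-room typo gets caught
```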
I am not going to get into using multiple tools for a single project now because that will double the length of this post. Stay tuned for that one.
Finally, exporting results can be problematic if you want anything more than canned reports, especially for new and solo users. All the tools I have worked with have some sort of onboard reporting, but for the most part they are not what I would call aesthetically pleasing. Professional modelers want to deliver professional-looking reports to their partners and clients, so the native reports are generally only used for compliance. Many firms want a standard look and feel to their reports, so the format does matter. All sorts of scripts to process data and charts to describe results have been created from energy model outputs over the years. There is even an IBPSA competition, Project StaSIO, that demonstrates what modelers need. But I am not aware of any tools that allow you to create customized report formats and looks within the tool. So again, the user is left to manually download results and post-process them to meet their needs. What I wrote here was all true when I first started creating energy models more than ten years ago. I think it is high time we all moved forward together.
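Until the tools catch up, a small post-processing script can at least standardize a firm's output format. A sketch with made-up end-use numbers, producing a sorted end-use table with percentages of total:

```python
import csv
import io

# Hypothetical end-use results (kWh) pulled from a simulation output file.
end_uses = {
    "Lighting": 120_000, "Fans": 45_000, "Pumps": 15_000,
    "Heating": 210_000, "Cooling": 95_000,
}

def end_use_report(results):
    """Render a firm-standard end-use table (kWh and % of total) as CSV,
    sorted largest first so the big consumers lead the report."""
    total = sum(results.values())
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["End Use", "kWh", "% of Total"])
    for name, kwh in sorted(results.items(), key=lambda kv: -kv[1]):
        writer.writerow([name, kwh, f"{100 * kwh / total:.1f}%"])
    writer.writerow(["Total", total, "100.0%"])
    return buf.getvalue()

print(end_use_report(end_uses))
```

The same function runs on every project, so every report leaves the office with the same structure regardless of which tool produced the raw results.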
Keywords
Building performance modeling, modeling workflow, data collection, quality control, unmet load hours, reporting, benchmarking
References
IBPSA Project StaSIO
The Current State of Affairs
2019-01-01
Contributor
NML
Energy modeling goes back many decades with numerous tools developed for all manners of purposes. So why are so many buildings still not benefiting from energy models? While I'm sure there are many reasons, cost or more specifically speed (time = money) is a big factor based on my experience.
The tools have incrementally improved over the decades from text-based editors to tools with graphical user interfaces. This has sped up the process of generating energy models greatly and has reduced the barrier to entry for many modelers. Yet market penetration is still relatively low after 30 or 40 years of development. The time it takes to collect information, generate a model, test different measures, complete quality control, and present results is often measured in weeks or months. Reliability has improved and some features have been added, but the native interfaces for EnergyPlus and eQUEST are largely unchanged since I first started using them in 2005. I can't think of any other software that has not had at least one complete overhaul in that time period, let alone several. Sure, the Microsofts and Apples of the world have deep pockets, but what they deliver is infinitely more complex than what we're doing. So why are we where we are?
There is clearly a market demand for energy modeling as energy codes become more stringent, along with a general drive towards reduced energy consumption for a wide variety of reasons (cost, environmental impact, resilience, etc.). The demand has increased, yet we users are generally left with a 1990's software experience or worse. I think it's safe to say that by and large the energy modeling software industry has disappointed its users. That is not to say the developers aren't trying hard, but maybe the understanding just isn't there. At a recent energy modeling conference I spoke with many software developers about my needs as a practitioner. It was generally lost on everyone I spoke to. The approach remains the same: partial solutions, scripting, and inaccurate models that leave users cobbling together a package of results for every project, which is slow and frustrating, and therefore expensive.
There are hundreds of modeling tools available to users, largely because so many users are disappointed with their experience that they develop their own tools to fill a need. This has compounded the problem in some ways because the tools were mostly developed in isolation from each other by disparate groups. Energy modelers in some cases have become data handlers who copy/paste/reformat data from one tool into another and then back again. This has spawned the scripting movement, focused primarily around Python, though scripts have been around for a while now. The unfortunate side effect is that energy modelers now need to be comfortable with coding. In my experience only about 10-20% of modelers fit this mold, and so the vast majority of users have been left behind, still manually copying and pasting data into tools.
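Much of that copy/paste work boils down to renaming fields between one tool's format and another's. A minimal translation sketch, with hypothetical field names on both sides:

```python
# Hypothetical field mapping between two modeling tools' zone records.
# Neither set of names belongs to a real tool; the point is the pattern.
TOOL_A_TO_B = {
    "zone_name": "Name",
    "lpd_w_m2": "Lights_Watts_per_Zone_Floor_Area",
    "occ_density_m2_person": "Zone_Floor_Area_per_Person",
}

def translate_zone(record_a):
    """Convert one zone record from tool A's schema to tool B's,
    dropping fields tool B has no equivalent for."""
    return {TOOL_A_TO_B[k]: v for k, v in record_a.items() if k in TOOL_A_TO_B}

zone = {"zone_name": "Office_1", "lpd_w_m2": 8.5, "occ_density_m2_person": 10}
print(translate_zone(zone))
```

A shared, maintained mapping like this is exactly what isolated tool development never produced, which is why every firm ends up writing its own.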
I ask the modeling software development community: if we can have hundreds of app developers create thousands of apps for iOS and/or Android that are interoperable with a great user experience, then why not for energy modeling and building performance modeling as a whole? This concept has been around for more than a decade, yet here we still sit on our data islands manually copy/pasting data like it's the 1990's. I think it's high time we moved forward together.
Keywords
References
Contact Us
Don't Be Shy, Say Hello.
linkedin.com/company/neumodlabs
bonjour@neumodlabs.com
github.com/neumodlabs
+1 608-571-3850