Bloom Housing
Leveraged responsible AI for affordable housing data & reporting

Client:
Bay Area Housing Finance Authority
Role:
Design lead
Skills:
User research, Design concepts, Usability testing, Developer collaboration
Problem
Bloom Housing had been successful in centralizing affordable housing listings and applications data within the Bay Area. One of the promises of this effort had always been the value of the insights that could be extracted from the data collected over time. Given the scale of the effort and the diversity of needs, data reporting had remained a manual process, with each stakeholder handling it in their own way due to liability concerns and the sensitive nature of personally identifiable information (PII).
With a new source of truth, there was the potential to more accurately forecast future housing supply needs based on incoming application data.
Approach
My approach was grounded in research to understand needs. There was significant interest in using this as an opportunity to experiment with AI as a utility to format, filter, and summarize data.
To ensure we had the trust of the community, I helped facilitate working groups alongside team members from Google.org to inform our efforts. Once we had identified a set of responsible principles and low-risk use cases, we were able to move forward with a pilot.



When things went sideways
Several sessions stalled due to conflicting opinions and a high level of mistrust in AI outputs. Through discussion, I found that participants had been assuming complex solutions that carried very high risk for the public.
As a result, we shifted toward using AI for more predictable results aimed at internal admins, building trust while avoiding unreliable public-facing experiences.
Outcome
Leading design efforts, I worked with MTC to create an initial Data and Reporting Dashboard serving data admins across the 9-county Bay Area:
Applications data dashboard visualizing household income, addresses, and demographics
AI-generated executive summaries based on pre-defined templates utilizing natural language
UX guardrails for AI features including opt-in disclaimers, data methodology statements, and feedback mechanisms
Process
Building trust through understanding
Through research, I came to understand the value of accurately reported data for policy decisions. I learned that it was crucial to prioritize the security of the data due to the sensitive nature of PII. I worked with engineering partners to ensure we had proper safeguards in place.
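One common safeguard for PII in pipelines like this is masking sensitive fields before any data reaches an AI service. The sketch below is purely illustrative of that pattern (the regexes and placeholder names are my own assumptions, not the project's actual implementation):

```python
import re

# Illustrative PII safeguard only -- not the production implementation.
# Mask common PII patterns before text is sent to any AI service.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    """Replace matched PII with typed placeholders like [EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text
```

For example, `redact("Call 555-123-4567")` yields `"Call [PHONE]"`. In practice a real safeguard would cover many more identifier types (names, addresses) and would typically run server-side, before logging.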
Finally, I learned that any solution would need to meet stakeholders where they were with existing systems. We worked with internal technical teams to establish an integration plan that would not add additional steps due to already limited capacity.



Collaborating on solutions
I learned quickly that any solution would need to address user skepticism and the unpredictable nature of AI outputs. We reached out to our partners at Google to design and conduct a collaborative workshop bringing together housing and technology experts.
I led a session on a responsible delivery process, including ethical guardrails. We were able to communicate the inherent risk and how it could be balanced by a transparent process with shared responsibilities.
I helped conduct several co-design working sessions to identify a set of use cases that were low-risk and high-value. We identified the need for an LLM to assist with data utility functions at scale, including data formatting and natural language summaries.



Focusing on first steps
I learned our stakeholders valued data that told stories. The first phase of our work focused on automating how we compiled and maintained previous reporting data. After taking time to normalize, segment, and visualize the data, we experimented with multiple ways to leverage natural language to filter and summarize.
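To give a flavor of the segmentation step, here is a minimal sketch of bucketing applicant household incomes into bands. The band thresholds and the `household_income` field name are hypothetical, not Bloom's actual schema:

```python
from collections import Counter

# Hypothetical income bands, loosely modeled on Area Median Income tiers.
BANDS = [
    (0, 30_000, "very low"),
    (30_000, 60_000, "low"),
    (60_000, 100_000, "moderate"),
    (100_000, float("inf"), "above moderate"),
]

def band_for(income):
    """Map a household income to its band label."""
    for low, high, label in BANDS:
        if low <= income < high:
            return label
    raise ValueError(f"negative income: {income}")

def segment(applications):
    """Count applications per income band; each application is a dict
    with a numeric 'household_income' field (hypothetical schema)."""
    return Counter(band_for(app["household_income"]) for app in applications)
```

These pre-computed counts then become the only facts a downstream summary is allowed to state.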
As we integrated AI tooling into the UX, I prioritized ethical and responsible design patterns that would make it clear to users when AI was being leveraged and give users the agency to opt in or out. Patterns that I utilized included exposing the data methodology, disclaimers, and options to provide feedback on generated outputs.
For the executive and data summaries, I utilized templates and constraints that would reduce speculation and focus on concise details. We took time to define the "voice" of the output to be more natural and avoid terminology that would create ambiguity.
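The template-and-constraints idea can be sketched as follows. The template text and field names here are illustrative, not the production prompt; the key point is that the summary restates only pre-computed, verified figures (in the pilot an LLM filled a similar template, which left it little room to speculate):

```python
# Illustrative only: a constrained summary built from pre-computed stats.
SUMMARY_TEMPLATE = (
    "In {period}, {total} applications were received. "
    "The largest applicant segment was '{top_band}' "
    "({top_share:.0%} of applications)."
)

def executive_summary(stats):
    """Render a summary strictly from verified, pre-computed stats:
    a dict with 'period', 'total', and a 'by_band' count mapping."""
    top_band, top_count = max(stats["by_band"].items(), key=lambda kv: kv[1])
    return SUMMARY_TEMPLATE.format(
        period=stats["period"],
        total=stats["total"],
        top_band=top_band,
        top_share=top_count / stats["total"],
    )
```

Constraining generation to slots like these trades expressiveness for predictability, which matched what our internal admin users trusted.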



Challenges and learnings
We were all new at this. Design, product, and engineering teams trod lightly, as these were the early days of leveraging new tools. We did not anticipate how much time it would take to align internally. While we had guiding principles, we spent several sprints getting our bearings and building empathy for each other.
We learned from early testing that distrust of generated outputs was widespread, so we scaled back our initial instincts toward more speculative outputs in favor of safer, predictable data summaries. We found more success as we moved away from forecasting trends toward more predictable data analysis.
Key takeaways
Building trust when introducing AI into sensitive domains was essential for adoption. By prioritizing stakeholder research, establishing ethical guardrails, and designing initially for utility over speculative insights, we were able to create a foundation for the use of responsible AI in the development of more reliable affordable housing policy.