The shadows and questions lurking behind AI-driven Zillow Offers.

At Etiq.Ai, our focus is helping organizations uncover and prevent unintended consequences in machine learning. Our VP and technical lead, Andy Crouch, has previously blogged about the unintended consequences of bias in black-box data models that influence financial and healthcare decisions. AI incidents damage not only the narrative around AI but also the business performance and sustainability of the company involved. Further, they raise questions about the application of AI itself: What was the use case for the deployment? What level of risk was identified in the design phase, and how was that risk mapped to robustness, explainability, fairness and bias? Were mitigation strategies identified at design time?

AI should be examined from three perspectives: the technical, the human-computer interface, and accountability. With an increasing number of organizations globally adopting AI to improve productivity and efficiency, understanding the individual components of this triad, and the interlinkages between them, is important. While there are no concrete regulatory requirements for conducting such a review, carrying it out regularly can prime a company for success.

In recent months, one organization that succumbed to the complexities of AI is Zillow. Zillow established its presence in the marketplace as the most-visited real estate website in the United States. Through its Zillow Offers division, it simplified the selling and buying experience, making it quick and easy while sparing home sellers and buyers alike the uncertainty of a traditional sale and viewing process. Zillow Offers employed an iBuying algorithm to facilitate the online estate agent process. However, this algorithm failed to perform at the level Zillow envisaged. Selling homes, and, importantly, ensuring that the seller gets a fair valuation for the property, is a complex task that requires systems thinking and human intervention.

Buying and owning a home is probably the single most expensive activity that an individual or family will undertake in their lifetime. This raises questions about our symbiotic relationships with digital tools, and further concerns about whether a purely technical solution can replace human intervention without a proper monitoring environment in place. The lean methodologies adopted by organizations such as Zillow, with their iBuying or instant-buying offerings, overlooked the subtleties that matter throughout the lifecycle of selling and acquiring real estate. The algorithms deployed could not spot damp, smelly basements or noise pollution in photos or videos, so Zillow ended up covering additional costs for contractors to fix properties, and ultimately struggled to sell properties above cost price. The competitive edge of flipping property quickly and at a profit was lost.
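A monitoring environment of the kind described above need not be elaborate. As a minimal sketch, the check below compares a valuation model's predictions against realized sale prices and flags when the average error drifts past a tolerance; every name, number, and threshold here is an illustrative assumption, not Zillow's actual system.

```python
# Illustrative sketch: flag a valuation model whose error has drifted.
# All function names and thresholds are hypothetical assumptions.

from statistics import mean

def mean_absolute_pct_error(predicted, actual):
    """Average absolute prediction error, as a fraction of the actual price."""
    return mean(abs(p - a) / a for p, a in zip(predicted, actual))

def should_pause_buying(predicted, actual, tolerance=0.05):
    """True if valuation error exceeds the business's assumed risk tolerance."""
    return mean_absolute_pct_error(predicted, actual) > tolerance

# Predictions running ~10% above realized sale prices trip the check.
predicted = [330_000, 440_000, 550_000]
actual = [300_000, 400_000, 500_000]
print(should_pause_buying(predicted, actual))  # True: ~10% error > 5% tolerance
```

The point is not the arithmetic but the feedback loop: realized outcomes flow back to a check that can halt the automated process before losses compound.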

Plato’s Allegory of the Cave ruminates on the nature of belief versus knowledge. The house-pricing datasets, automated valuations and predictions used by Zillow Offers shed light on the belief that iBuying algorithms could replace the brokered service offered by real estate agents. Until a few weeks ago, Zillow Offers’ entire existence rested on an algorithm that failed to represent objective reality. While the algorithm did not discriminate, the fundamental takeaway concerns the negative effects that can creep into society in the absence of bridges between technology, humans and accountability.

As businesses increase their reliance on algorithms, appropriate testing and monitoring processes are required. As the focus on extended accountability and transparency in AI grows, building auditable systems that align competing business priorities is crucial. At Etiq.Ai, we work with our partners to identify and prevent the unintended consequences of algorithms and AI from day zero.