While the benefits of the so-called “smart home” may be obvious to the electronics industry, things aren’t quite so clear to the average consumer. The key to acceptance, adoption, and even enthusiasm will be the applications and services the smart home delivers. One critical enabler for that will be real-time sensing, allowing the Internet of Things (IoT) to decide exactly what the “things” need to do. Widely deployed sensing becomes the gateway to an Internet of awareness that will feed the IoT, delivering the right services at the right time to usher in a new era of efficiency and convenience that consumers will be more than willing to pay for.
How Smart is Smart?
As the IoT makes its first tentative steps into our homes, there is a tendency to define the “smart home” in terms of the more visibly interactive examples of the technology, such as the voice-activated devices from Amazon and Google that are becoming commonplace. But do the flashy lights or a soothing voice reading us the day’s headlines really constitute “smart?”
The more subtle (and ultimately powerful) starting point is along the lines of the initial Nest thermostat, where the homeowner would actually spin the dial (gasp!) to adjust the temperature and, in doing so, teach the device their preferences. By collecting and analyzing individual data points of time, temperature, and presence, the device begins to recognize patterns. With that pattern recognition comes pattern prediction. In the case of the thermostat, a correct prediction is rewarded by a lack of user intervention, while a wrong one is quickly corrected by the user in real time. No after-the-fact analysis is needed. The system learns, thereby introducing the first level of actual “smarts.”
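The learning loop described above can be sketched in a few lines of code. This is a minimal illustration, not how Nest actually implements it: it assumes a per-hour setpoint schedule and a simple exponential nudge toward each manual correction, so that a correct prediction (no dial spin) leaves the schedule alone while a wrong one pulls it toward the user's preference.

```python
from collections import defaultdict

class LearningThermostat:
    """Hypothetical sketch of schedule learning: predict a setpoint per
    hour of day, and treat any manual dial adjustment as a training signal."""

    def __init__(self, default_setpoint=70.0, learning_rate=0.5):
        self.rate = learning_rate
        # hour-of-day -> learned setpoint, starting from a neutral default
        self.setpoints = defaultdict(lambda: default_setpoint)

    def predict(self, hour):
        # A correct prediction is "rewarded" by a lack of user intervention:
        # nothing changes, and this value is simply used again tomorrow.
        return self.setpoints[hour]

    def user_adjusts(self, hour, new_setpoint):
        # A wrong prediction is corrected in real time; nudge the learned
        # schedule toward the user's chosen temperature.
        current = self.setpoints[hour]
        self.setpoints[hour] = current + self.rate * (new_setpoint - current)

t = LearningThermostat()
t.user_adjusts(7, 68.0)   # user dials down at 7 a.m.
t.user_adjusts(7, 68.0)   # ...and again the next morning
print(round(t.predict(7), 1))  # prediction has drifted toward 68.0
```

A real product would fold in presence detection and day-of-week effects, but the principle is the same: the user's ordinary behavior is the training data.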
The Sensing Elements
No one (hopefully) makes decisions in a vacuum. Data is required. In the case of smart homes or other smart spaces, as we consider the kinds of data the IoT can make use of, we can build the list of needed sensing from the application view or enabling view. The application view starts from visualizing a use-case, then plugging in the technological elements we believe can solve the matter. If we want to count people, for instance, we can consider the types of sensing options available, including passive-infrared, direct/higher resolution image sensing, low-resolution image sensing, time of flight, or something else. The category of “something else” is, of course, the tricky one as it suggests something that has yet to be associated with that type of application.
An alternative approach would be to conceptually envision throwing sensors into the space, and in so doing, enable collective creativity to start the process of invention from there. We essentially watched that type of approach play out in the smartphone market. While there was usually at least one use-case associated with a particular type of sensor added to the mobile device, understanding every use-case was not a requirement. It was more of a “build it and they will come” approach. Add a GPS sensor for the known map application and let’s see what the app makers come up with from there. In short order, we saw navigation applications with crowd-sourced traffic updates, discount coupons being pushed at the phone user as they drive near a shopping mall, and even video games where you virtually chase invisible characters and, in reality, sometimes actually fall off a cliff. (How many of us can claim to have predicted more than the first one?)
The Sensing Platforms
At first glance, one could assume our smart home sensors should simply be added to the appliance that makes use of them. With that, we expect to see something like appliance temperature/humidity and open/close Hall-effect sensors in the refrigerator, ambient temperature/humidity in the HVAC thermostat, and smoke/fire in the kitchen hood and in some scattered smoke detectors. However, if we think about the constant elements that make up our homes (smart or not), we may get a better result by combining sensors with seemingly unrelated hosts.
Walls, windows, furniture, lights, and (when occupied) people are all constants in our built space. The occupants already serve as one type of mobile sensor platform when they walk into a space carrying a smartphone or wearing a smartwatch. By applying energy-harvesting techniques, wall materials could have useful sensors built in, such as humidity or motion detectors.
Lights, whether in a fixed mounting or as a bedside table lamp, will be ideal platforms for a broad variety of sensing. Scattered everywhere and connected to power all the time, they can serve as remarkable hosts for HVAC temperature and humidity sensors that could move off the wall, away from the wires and batteries, while sharing the BOM cost of the networking module with the connected light. Presence, motion, activity, identity, and ambient light represent a variety of disparate functions that will usefully integrate into tomorrow’s smart lighting platform.
Almost Ready Now
While some of the sensing elements that enable all this are just arriving on the scene, it is important to note we already have the most critical value-point enabler in place. Thanks to the smartphone revolution, the cloud-based applications platform infrastructure has already been invented and deployed. That application/big data platform will drive smart home decision/response patterns in real time and predictively, just as it does when our smartphone navigator happily pops up at 7:30 a.m. to ask if we’re going to drive to work. Prediction is at work already. Will we see more apps and services, and will they improve? Naturally. Improvement will drive user satisfaction, and satisfied users turn over more data, which in turn refines the algorithms and increases the opportunity to monetize what is learned. The cycle of success and value feeds itself.
As the IoT drives massive growth in deployed sensor counts, the added data flow presents no small challenge. The good news is that it isn’t hard to recognize the cost savings that will stem from integrating sensors into already smart appliances (such as lights) that will pick up a big part of the load. Since the appliance doesn’t have to work all that hard to keep up with humans, there will be plenty of time and compute power available to implement pre-processing and so-called “fog” computing. I don’t need to know what the temperature is right now, as much as I need to know when it moves more than the pre-set threshold from the target. Until then, no updates required. The cloud continues to process the temperature input as 75 degrees.
Circling back to the original premise: while we can deliver enabling technologies and visualize some of the apps, how do we drive acceptance among the convenience-loving, hassle-hating consuming public? The answer comes from combining two simple elements: simplicity and value. With regard to simplicity, all eyes turn towards our Silicon Valley (and elsewhere) networking champions. Much as we have seen TCP/IP evolve from a painful, expert-only configuration nightmare into today’s plug-play-and-forget-it self-configuring dream, we can expect the same from the IoT in our smart home. Anyone who has set up a ZigBee-based lighting control set has at least glimpsed the start of it. Plug it in, open the app, push the button on the bridge (one-finger security), and OK the addition of new lights into the network. This becomes a variant of the same “build it and they will come” philosophy from earlier.
All the key players in networking, communications, and data aggregation agree publicly on the treasure trove of opportunity that the IoT and smart home represent, and they are collectively driving the standards, consortiums, and platform technologies that will help ensure market success.
The smart home will work and users will quickly accept the services as valuable. The key will be the deployment of sensors, which no one will really want to buy as standalone elements. What they will want to buy are the platforms that contain them, such as smart appliances, innovative smart lighting, and other intelligent “things” that provide a benefit, and where still more sensors can be integrated. Build it, and the apps will come.
SIDEBAR: Applications vs. Enabling Sensor Tech
For some of what we can expect from the application view, identity (RFID or visual recognition), presence, and people counting will trigger customization features. Temperature and humidity sensors directly address comfort factors in our home, and factor into energy-efficient operation. Air quality sensing will measure CO2 and volatile organic compounds (VOCs), and adapt the fresh air circulation to balance energy efficiency and our health. Smoke and CO sensors will keep a silent vigil over our children and pets, and eventually we’ll see different types of bio-monitoring ranging, perhaps, from pathogen detection to heart-rate monitoring. Hazardous situation monitoring can be accomplished by mid-IR sensing for flame or specific types of gas signatures.
From the enabling technology view, silicon-scale spectrometry has become a cost-effective reality, and opens the door to detecting all kinds of interesting things. As to what those are, the ideas are really just starting to form (multi-spectral sensing at a home/consumer price point hasn’t been a reality before, so the question of “what can you detect” is only now taking shape). Affordable, high-precision barometric pressure sensing could have a place in detecting occupancy events, or for indoor location purposes to determine whether “something” is mounted at floor, wall, or ceiling height. We know microwave and time-of-flight sensing can give us presence, motion, and posture (lying down, sitting, standing), but what else might they tell us? Magnetic sensing is used in a variety of applications, but maybe we can detect shifts in the family’s aura? Perhaps not, but we have seen often enough that the app can suddenly appear simply because the critical enabling technology has arrived on the scene.