Every office has a plant that seems destined for tragedy. Too much water, not enough water, forgotten over a long weekend… it’s a cycle we all know. Instead of resigning ourselves to droopy ferns, we asked: what if the plants could just tell us what they needed?
That question turned into a surprisingly rich engineering challenge. A Raspberry Pi wired to soil-moisture and air sensors became our plants’ voice. AWS IoT Core gave us a secure channel to send data upstream, S3 became our data lake, and Lambda + EventBridge stitched the system into something that could think, react, and notify. To cap it off, we gave the plants a Slack persona, “Audrey,” a nod to Little Shop of Horrors, who now politely (and sometimes dramatically) begs for water in our team channels.
It may sound silly, but that’s the point. By wrapping serious technology in a playful use case, we created a sandbox where learning felt natural. There’s no better way to internalize a new tool than by using it to solve a problem you actually care about, even if that problem is just keeping the office greenery alive. Fair warning: this blog is about to get technical, so buckle up and let’s get into it.
Our work at Red Oak Strategic centers on using Amazon Web Services (AWS) tools and software to build custom data solutions for our customers, and we lean on that skillset here in developing our plant app. The architecture above highlights how we connect our “Internet of Things” (IoT) technology, a fancy term for a physical product that can connect to the cloud, to our internal AWS account, where we store, analyze, and report that data out to our team via Slack.
The main AWS resources we use here are as follows:
- AWS IoT Core: registers the Raspberry Pi and lets us monitor its health and activity. This technology is designed to handle commercial applications that scale to millions of concurrent devices reporting hundreds of metrics per second; here we use the same process on a much smaller scale.
- Amazon Simple Storage Service (S3): can hold virtually limitless amounts of data in virtually any format. Red Oak uses S3 to replace expensive, old-school databases with “data lakes” built exclusively on S3 for a fraction of the cost. In this case, our data is delivered as well-structured data tables in CSV format, costing us less than one cent per month.
- Amazon EventBridge: monitors our S3 storage and triggers analytics processes when new data is delivered, which means we only spend money on analytics when there is work worth doing. EventBridge can also start jobs on a schedule using more traditional “cron” expressions (see the sketch just after this list).
- AWS Lambda: the workhorse of our architecture. Lambda is “serverless” compute: instead of paying for a server sitting in a warehouse while your team and your data are sleeping, Lambda starts in milliseconds, charges by the millisecond of use when needed, and then shuts down. That “pay for what you use” model is a key philosophy of AWS best practice.
- Amazon API Gateway: processes requests from our Slack bot and our users, who can ask for updates on soil moisture and get an answer back in seconds. API Gateway is the access point that handles these webhook-based requests; behind the scenes, it hands each message to a serverless Lambda function that runs the analysis and sends a reply.
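Since EventBridge cron syntax comes up again later (our scheduled plant checks run Monday through Thursday), here’s a minimal boto3 sketch of creating one such rule. The rule name is hypothetical, and note that EventBridge evaluates cron expressions in UTC:

```python
import boto3

events = boto3.client("events")

# Mon-Thu at 13:15 UTC (9:15 AM Eastern during daylight saving time).
# EventBridge cron fields: minute hour day-of-month month day-of-week year.
events.put_rule(
    Name="audrey-morning-check",  # hypothetical rule name
    ScheduleExpression="cron(15 13 ? * MON-THU *)",
    State="ENABLED",
)
```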
To measure and communicate the health of your plant to Slack and AWS, you’ll need a small computer that can attach a soil sensor and connect to Wi-Fi. For our system we used a Raspberry Pi bundle alongside an Arduino that can mount a custom soil sensor. Photos are below each section. We begin by setting up the Raspberry Pi:
On the Pi, we talk to an Arduino microcontroller over USB serial to request a single reading and timestamp it. The Arduino acts as an intermediary to an AM2302 air sensor and a soil moisture sensor. The Pi collects data every minute but only pushes to AWS hourly. Since we’re working with a remote sensor, we have to plan for hardware and network failures: the Pi only clears its minute-resolution data cache after a successful PUT to S3.
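Here’s a minimal sketch of that hourly upload job. The cache path and bucket name are hypothetical; the important part is that the local cache is only truncated after the PUT succeeds:

```python
import os
from datetime import datetime, timezone

import boto3

CACHE_PATH = "/home/pi/cache.csv"  # hypothetical cache written by the minute job
BUCKET = "plant-telemetry"         # hypothetical bucket name

def upload_cache() -> None:
    """Push the minute-resolution cache to S3; clear it only on success."""
    if not os.path.exists(CACHE_PATH) or os.path.getsize(CACHE_PATH) == 0:
        return  # nothing to send this hour

    key = datetime.now(timezone.utc).strftime("readings/%Y/%m/%d/%H%M%S.csv")
    try:
        boto3.client("s3").upload_file(CACHE_PATH, BUCKET, key)
    except Exception:
        return  # network hiccup: keep the cache and let the next hour retry

    open(CACHE_PATH, "w").close()  # PUT succeeded, safe to drop the local copy

if __name__ == "__main__":
    upload_cache()
```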
A quick note on tooling: we develop over VS Code’s Remote-SSH connection, which lets us write and run code directly on the Pi from our regular laptops. We can even pull/commit from git and edit our cron jobs directly on the Pi!
Here’s the bottom-up code stack:
On the Arduino we have a very basic sketch that returns sensor data on request:
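The sketch below is a hedged reconstruction rather than our exact file: it assumes the AM2302 hangs off digital pin 2 (read through the common Adafruit DHT library) and the soil probe off A0, and it replies with one CSV line whenever it sees an `R` on the serial port:

```cpp
#include <DHT.h>  // Adafruit DHT sensor library

#define DHT_PIN 2    // assumed wiring: AM2302 data line on D2
#define SOIL_PIN A0  // assumed wiring: soil probe on A0

DHT dht(DHT_PIN, DHT22);  // the AM2302 is a packaged DHT22

void setup() {
  Serial.begin(9600);
  dht.begin();
}

void loop() {
  // Wait for the Pi to request a reading.
  if (Serial.available() > 0 && Serial.read() == 'R') {
    float tempC = dht.readTemperature();
    float humidity = dht.readHumidity();
    int soilRaw = analogRead(SOIL_PIN);  // raw 0-1023 ADC value

    // Reply with one CSV line: temperature,humidity,soil
    Serial.print(tempC);
    Serial.print(",");
    Serial.print(humidity);
    Serial.print(",");
    Serial.println(soilRaw);
  }
}
```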
This is the every-minute job:
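Again, a sketch rather than the original script: it assumes the Arduino enumerates as /dev/ttyACM0 and appends to the same hypothetical cache file the hourly uploader reads:

```python
import csv
import time
from datetime import datetime, timezone

import serial  # pip install pyserial

CACHE_PATH = "/home/pi/cache.csv"  # same hypothetical cache file as above

def take_reading() -> None:
    """Request one sample from the Arduino and append it to the local cache."""
    with serial.Serial("/dev/ttyACM0", 9600, timeout=5) as conn:
        time.sleep(2)  # opening the port resets most Arduinos; let it boot
        conn.write(b"R")  # ask the sketch for one reading
        line = conn.readline().decode(errors="replace").strip()

    temp_c, humidity, soil_raw = line.split(",")
    with open(CACHE_PATH, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), temp_c, humidity, soil_raw]
        )

if __name__ == "__main__":
    take_reading()  # scheduled from cron: * * * * *
```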
From the Pi, we publish each reading to AWS IoT Core using the AWS IoT Device SDK (MQTT over mutual TLS). We use QoS 1 so the broker acknowledges each delivery, and a narrowly scoped IoT policy so the device can only connect as itself and publish to its own topic.
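A minimal publish sketch with the AWS IoT Device SDK v2 for Python (`pip install awsiotsdk`); the endpoint, certificate paths, client ID, and topic are all placeholders you’d pull from your own IoT Core registration:

```python
import json
from datetime import datetime, timezone

from awscrt import mqtt
from awsiot import mqtt_connection_builder

# Placeholder endpoint, cert paths, and client ID from your IoT Core setup.
connection = mqtt_connection_builder.mtls_from_path(
    endpoint="xxxxxxxx-ats.iot.us-east-1.amazonaws.com",
    cert_filepath="/home/pi/certs/device.pem.crt",
    pri_key_filepath="/home/pi/certs/private.pem.key",
    ca_filepath="/home/pi/certs/AmazonRootCA1.pem",
    client_id="audrey-pi",
)
connection.connect().result()

reading = {
    "ts": datetime.now(timezone.utc).isoformat(),
    "temp_c": 21.4,  # placeholder values for the sketch
    "humidity": 48.0,
    "soil_raw": 512,
}

# QoS 1 (AT_LEAST_ONCE): the broker must acknowledge the delivery.
publish_future, _ = connection.publish(
    topic="plants/audrey/telemetry",  # hypothetical topic
    payload=json.dumps(reading),
    qos=mqtt.QoS.AT_LEAST_ONCE,
)
publish_future.result()
connection.disconnect().result()
```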
The Raspberry Pi’s PUTs to S3 trigger our main notification process: S3 object-created events invoke a Lambda that collects the latest soil moisture levels and posts low-moisture warnings to Slack. This design keeps the Lambda from running when there’s no new data, but with the drawback that it will keep running (and nagging) over the weekend when nobody is around. To counter this, the Lambda takes an environment variable called DOW_MASK (a day-of-week mask) and a timezone (an IANA timezone name such as America/New_York), evaluates them at runtime, and exits early on any day the notifications should be suppressed.
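The mask format is our own convention, so treat this as a sketch: here DOW_MASK is assumed to be seven 0/1 flags starting Monday, so "1111000" means run Mon–Thu and stay quiet Fri–Sun:

```python
import os
from datetime import datetime
from zoneinfo import ZoneInfo

def lambda_handler(event, context):
    # Assumed convention: seven 0/1 flags, Monday first.
    mask = os.environ.get("DOW_MASK", "1111111")
    tz = ZoneInfo(os.environ.get("TIMEZONE", "America/New_York"))

    weekday = datetime.now(tz).weekday()  # Monday == 0
    if mask[weekday] != "1":
        return {"status": "suppressed", "weekday": weekday}

    # ...otherwise pull the new object named in the S3 `event`,
    # check soil moisture against thresholds, and post to Slack...
    return {"status": "processed"}
```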
To create your Slack app, you’ll need to sign up as a Slack developer and have the appropriate scopes in your Slack account to create, attach, and add apps to your workspace. This walkthrough does a great job of explaining that process.
This pattern isn’t ideal for a large system (for example, one where we can’t control the edge device’s upload schedule), but the DOW_MASK idea (which could be extended to time of day) can be handy for suppressing or batching alerts to certain times of day while still enabling event-driven actions during certain hours (perhaps holding a processing job until overnight when running on Spot pricing?).
Our scheduled checks run Monday through Thursday, around 9:15 AM and 4:10 PM. The notifier Lambda applies moisture thresholds, enforces a cooldown so Audrey doesn’t nag, and posts to Slack via `chat.postMessage`.
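Here’s a hedged sketch of that threshold-plus-cooldown logic using the slack_sdk package; the channel, threshold, and cooldown values are placeholders:

```python
import os
import time

from slack_sdk import WebClient  # pip install slack_sdk

SOIL_THRESHOLD = 35        # percent; placeholder calibration value
COOLDOWN_SECS = 6 * 3600   # don't nag more than once every six hours

client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])
_last_alert = 0.0  # survives only while the Lambda container stays warm

def maybe_alert(soil_pct: float) -> None:
    """Post a watering plea if the soil is dry and we're out of cooldown."""
    global _last_alert
    if soil_pct >= SOIL_THRESHOLD:
        return
    if time.time() - _last_alert < COOLDOWN_SECS:
        return
    client.chat_postMessage(
        channel="#plants",  # hypothetical channel
        text=f"Feed me! Soil moisture is down to {soil_pct:.0f}%. -- Audrey",
    )
    _last_alert = time.time()
```

One caveat: the in-memory cooldown only lasts as long as the warm Lambda container, so a production version would persist the last-alert timestamp somewhere like DynamoDB or S3.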
The app exposes two slash commands, `/audrey soil` and `/audrey lastwater`. The handler validates Slack’s request signatures and replies quickly (Slack expects an acknowledgment within three seconds); anything slower answers asynchronously via the request’s response_url.
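A sketch of the signature check and fast acknowledgment, assuming an API Gateway HTTP API (payload v2, which lower-cases header names) in front of the Lambda:

```python
import hashlib
import hmac
import json
import os
import time

SIGNING_SECRET = os.environ["SLACK_SIGNING_SECRET"]

def verify_slack_signature(headers: dict, body: str) -> bool:
    """Check Slack's v0 signature: HMAC-SHA256 over 'v0:{timestamp}:{body}'."""
    ts = headers.get("x-slack-request-timestamp", "0")
    if abs(time.time() - int(ts)) > 300:
        return False  # stale request; basic replay protection
    digest = hmac.new(
        SIGNING_SECRET.encode(), f"v0:{ts}:{body}".encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest("v0=" + digest, headers.get("x-slack-signature", ""))

def lambda_handler(event, context):
    # Assumes the raw form-encoded body arrives un-decoded
    # (isBase64Encoded handling omitted for brevity).
    if not verify_slack_signature(event["headers"], event["body"]):
        return {"statusCode": 401, "body": "bad signature"}

    # Acknowledge within Slack's three-second window; anything slower
    # should POST its answer to the command's response_url instead.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"text": "Checking on Audrey's soil..."}),
    }
```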
To calibrate, we capture an air-dry reading and a fully saturated reading for each pot, map raw values onto a 0–100% moisture scale, and define buckets (e.g., DRY <35%, MEDIUM 35–60%, WET >60%).
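Mapped into code, the calibration might look like this; the raw constants are hypothetical per-pot values (many resistive probes read higher when dry, so check the direction of your own sensor):

```python
# Hypothetical per-pot calibration constants captured by hand.
RAW_DRY = 810  # reading in bone-dry soil
RAW_WET = 390  # reading fully saturated

def to_percent(raw: int) -> float:
    """Linearly map a raw ADC reading onto a 0-100% moisture scale."""
    pct = (RAW_DRY - raw) / (RAW_DRY - RAW_WET) * 100
    return max(0.0, min(100.0, pct))  # clamp sensor noise outside the range

def bucket(pct: float) -> str:
    """Translate a moisture percentage into the buckets Audrey reports."""
    if pct < 35:
        return "DRY"
    if pct <= 60:
        return "MEDIUM"
    return "WET"

assert bucket(to_percent(600)) == "MEDIUM"  # 600 raw -> 50% -> MEDIUM
```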
By the end of this project, our ferns had become testbeds for a real IoT pipeline. Every piece of the system, from device provisioning with X.509 certificates to event-driven processing and secure data storage, mirrors the architecture of production-grade solutions. The same approach we used to water plants could just as easily monitor industrial equipment, track environmental data, or manage smart devices in the home.
What makes projects like this so valuable is the low stakes combined with high fidelity. If the system fails, the worst that happens is a droopy fern. But in the process of debugging, we’ve stress-tested IAM policies, secured S3 buckets, validated sensor calibration, and learned how to orchestrate multiple AWS services into a cohesive flow. That experience compounds, so that the next time we tackle a mission-critical client system, we bring this confidence and muscle memory with us.
It’s a lot like learning a new language. Reading vocabulary lists only takes you so far; fluency comes from immersion. Ordering food, making small talk, stumbling through mistakes until the patterns stick. In the same way, we don’t become fluent in IoT, serverless, or event-driven architectures by skimming docs alone. We need to build, break, and live inside the system until knowledge is second nature.
And maybe most importantly: this was fun. It reminded us why many of us became engineers in the first place: not just to solve problems, but to experiment, to create, and to make things a little more delightful. Technology sticks when it’s lived, not just studied. In this case, we happened to live it through the eyes (and thirst) of a Slack-savvy office fern.