
Engineer Questions
What are the hue and saturation about? Will a colorful bathing suit throw it off?
The hue and saturation thresholds are used to calibrate the pool's location in the image frame. This calibration step saves the pool's coordinates in memory for use later when processing images. During calibration, the program also requires a minimum area before it will accept a 'blue' region as the pool, so a person wearing a blue bathing suit will not affect the calibration stage. Examples of pool detection are given in the Progress Log from Week 2.
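As a rough sketch of what this calibration step could look like (assuming an OpenCV-based pipeline; the HSV bounds, minimum area, and file name below are placeholder values, not our final numbers):

```python
import cv2
import numpy as np

# Placeholder HSV range for pool water; the real values come from calibration testing.
LOWER_BLUE = np.array([90, 60, 60])
UPPER_BLUE = np.array([130, 255, 255])
MIN_POOL_AREA = 50_000  # pixels; rejects small blue objects such as a bathing suit

def calibrate_pool(frame):
    """Return the pool's bounding box (x, y, w, h), or None if no blue region is large enough."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_BLUE, UPPER_BLUE)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if cv2.contourArea(largest) < MIN_POOL_AREA:
        return None  # blue area too small to be the pool
    return cv2.boundingRect(largest)

if __name__ == "__main__":
    frame = cv2.imread("calibration_frame.jpg")  # placeholder image path
    print(calibrate_pool(frame))
```

Because a bathing suit covers far fewer pixels than the pool, the minimum-area check rejects it during calibration.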
Does it work at night?
Yes, we will be adding a light to the project for night use. This light will be activated when the motion sensor is triggered during low-light conditions.
Another alternative is to use a NoIR camera with an IR LED to illuminate the pool at night.
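A minimal sketch of the light-activation logic, assuming a PIR motion sensor, an ambient light sensor, and an LED or relay wired to Raspberry Pi GPIO pins (the pin numbers and darkness threshold are assumptions for illustration):

```python
from gpiozero import MotionSensor, LightSensor, LED
from signal import pause

pir = MotionSensor(4)     # placeholder GPIO pin for the PIR motion sensor
ldr = LightSensor(17)     # placeholder pin for an ambient light sensor
ir_light = LED(27)        # placeholder pin driving the IR LED or flood light relay

DARK_THRESHOLD = 0.2      # assumed cutoff below which we treat the scene as "night"

def on_motion():
    # Only switch the light on when it is dark; daytime motion needs no extra light.
    if ldr.value < DARK_THRESHOLD:
        ir_light.on()

def on_no_motion():
    ir_light.off()

pir.when_motion = on_motion
pir.when_no_motion = on_no_motion
pause()
```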
Pet photo? What if the pet is in a different orientation from the photo, or is wet?
Multiple reference pictures of each pet in different orientations will be used.
Will the camera have the resolution for the requirements? What resolution is really needed?
We will determine the required resolution through testing and report it as soon as possible.
What if the users are away from their phone? That sounds strange in this day and age but does anyone take their phone into the shower?
A local alarm will be included in the project; it will sound when the system is triggered, so an alert does not depend on the user having their phone nearby.
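A short sketch of how the on-device alarm could be driven, again assuming gpiozero on the Raspberry Pi (the pin number and duration are placeholders):

```python
from gpiozero import Buzzer
from time import sleep

alarm = Buzzer(22)  # placeholder GPIO pin for the buzzer or siren driver

def sound_alarm(duration=30):
    """Pulse the local alarm so nearby adults are alerted even without their phones."""
    alarm.beep(on_time=0.5, off_time=0.5)  # beeps in the background until turned off
    sleep(duration)
    alarm.off()
```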
Why not add solar?
We will be changing our power supply from batteries to a wall outlet, so solar will not be necessary.
Other projects trying for facial recognition went away from Arduino because it could not do it. Are you sure it's good enough?
We will be moving away from Arduino since it does not have the processing power necessary for image processing. The group will be changing its processor to the Raspberry Pi 4.
How is it detecting the pool? Color? User sets outline like in VR?
A calibration button on the app will initiate detection of the pool. The pool's coordinates will be saved in memory for use later in the program.
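As a sketch of how the calibration result might be stored once the app's calibration button fires, reusing the hypothetical calibrate_pool() function shown above (the file path is a placeholder):

```python
import json

POOL_CONFIG = "/home/pi/pool_region.json"  # placeholder location for the saved calibration

def save_pool_region(bbox):
    """Persist the calibrated pool bounding box so later frames can be cropped to it."""
    x, y, w, h = bbox
    with open(POOL_CONFIG, "w") as f:
        json.dump({"x": x, "y": y, "w": w, "h": h}, f)

def load_pool_region():
    with open(POOL_CONFIG) as f:
        r = json.load(f)
    return r["x"], r["y"], r["w"], r["h"]
```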
User setup with tracking balls like with surgical robot setup? Stickers on tiles? Tape outline?
We will test this and may use tape to outline the pool.
How do you optimize the 3D tracking?
We will do a performance analysis.
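One simple form that analysis could take is timing each stage of the per-frame pipeline; this sketch assumes hypothetical detect_objects() and track() functions standing in for our detection and tracking code:

```python
import time

def profile_pipeline(frames, detect_objects, track):
    """Report the average per-frame time spent in detection and in tracking."""
    detect_total = track_total = 0.0
    for frame in frames:
        t0 = time.perf_counter()
        detections = detect_objects(frame)
        t1 = time.perf_counter()
        track(detections)
        t2 = time.perf_counter()
        detect_total += t1 - t0
        track_total += t2 - t1
    n = len(frames)
    print(f"detection: {detect_total / n * 1000:.1f} ms/frame")
    print(f"tracking:  {track_total / n * 1000:.1f} ms/frame")
```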
How to address night and low light conditions?
We will feed the system more pictures and examples of people and pets captured under different lighting conditions.
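One way to get extra low-light examples from pictures we already have is to darken them synthetically; a rough sketch using OpenCV (the brightness factors are arbitrary choices, not tested values):

```python
import cv2
import numpy as np

def make_low_light_variants(image_path, factors=(0.6, 0.4, 0.25)):
    """Create darkened copies of an image to simulate evening and night lighting."""
    img = cv2.imread(image_path)
    variants = []
    for factor in factors:
        dark = np.clip(img.astype(np.float32) * factor, 0, 255).astype(np.uint8)
        variants.append(dark)
    return variants
```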
Automatic call to 911?
We decided not to add this because the user will already be on their phone and viewing the video feed. Testing this feature would also be difficult without alerting the police.
Better software integrated into existing security systems like Nest?
This could be a possible project after the semester.
Best-in-class systems misfire all the time (e.g., detecting a car as a person). How can you improve on that? Life-or-death use requires higher sensitivity.
We will feed the system as much information as we can through pictures to train it effectively. Over 100 images will be analyzed through more than 100,000 training iterations to ensure quality detection.
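As a rough illustration of how roughly 100 images can yield over 100,000 training iterations (about 1,000 passes over the data), assuming a generic per-image update step; update_model() is a placeholder, not our actual training code:

```python
import random

def train(images, labels, update_model, epochs=1000):
    """With ~100 images, 1,000 epochs gives ~100,000 single-image training iterations."""
    iterations = 0
    data = list(zip(images, labels))
    for _ in range(epochs):
        random.shuffle(data)            # reshuffle each epoch so the model sees varied order
        for image, label in data:
            update_model(image, label)  # placeholder for the actual training step
            iterations += 1
    return iterations
```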
Why battery? Why not wire it through security lights or outdoor plug common in most houses?
Yes, we will be switching to an AC outlet power source.