Press "Enter" to skip to content

Are security cameras a security threat?

Smile, you’re on camera!  And in some cases, it’s truly candid.

As technology has progressed to the point that almost every adult, and many children, carries the power of a smartphone in their hand, it has been possible for years to be monitored secretly, and against your will, at any moment.

It’s as though the average person is subjected to the same anonymous scrutiny that celebrities face from overbearing paparazzi anytime they venture out of their homes.

But what happens when you attempt to provide yourself and your family with a measure of security that was once reserved for only the wealthy and voluntarily install security cameras in your home?  You would think you should have a reasonable expectation of personal privacy during the more intimate moments you share, both alone and with your loved ones, right?

Well, according to a recent report from Bloomberg, one provider of home security surveillance services, Amazon, with its Cloud Cam, is using a human workforce to review and annotate the short video clips its cameras capture in order to train the Artificial Intelligence (AI) algorithms behind its motion-detection software, improving their ability to distinguish a real threat, like a home invader, from a false alarm, like a household pet moving around.

Nowhere in the long and cumbersome Amazon Cloud Cam user terms and conditions does Amazon advise customers that human beings may review footage taken from the cameras.  The Bloomberg report also states that despite “Amazon’s insistence that all the clips are provided voluntarily, according to two of the people (sources that spoke to Bloomberg on the condition of anonymity), the teams have picked up activity homeowners are unlikely to want shared, including rare instances of people having sex.”

One particularly disturbing issue raised in the report is that, although Amazon has attempted to institute some security measures around the Cloud Cam annotation operation, including banning the use of mobile phones, some employees have been known to pass captured user footage to people outside their teams.

Another major problem facing both users and providers of these so-called “smart cameras” is the imperfections being exposed in this still-developing technology.  For instance, it is well documented that devices like Amazon’s Alexa and Apple’s Siri sometimes mishear commands.  The software powering these smart camera systems is similar in many ways and in some cases intertwined, so users can easily and inadvertently trigger a device to begin recording footage that may then be reviewed by human employees of the service provider, either in real time or after the fact.

If these issues weren’t bad enough, U.K. consumer watchdog group Which? recently published a study that also points out the susceptibility of smart camera systems to hacking.  After testing six wireless cameras, Which? discovered that, due to weak passwords and unencrypted data, hackers could easily take control of the cameras remotely and spy on people’s homes at will.  In addition, malware like the Mirai botnet has been known to infect closed-circuit camera systems that work similarly to cloud camera systems like Amazon Cloud Cam and another smart camera system growing in popularity, Ring cameras.

In fact, a disturbing Amazon review of Victure’s WiFi IP Camera stated, “Someone spied on us.  They talked through the camera and they turned the camera on at will. Extremely creepy.  We told Amazon.  Three of us experienced it, yet they’re still selling them.”

