
A safe environment or an environment of fear?

Worker safety has become a top priority for most companies. Technology plays a big role in improving workplace safety, and it does so in all kinds of ways. But does using technology create an even bigger risk than the one it solves?

Some uses of technology are relatively uncontroversial and are widely accepted as safety precautions. The use of drones and robots to perform tasks too dangerous for humans, or to operate in environments where humans simply cannot survive, is one example. Other uncontroversial technology includes devices that monitor assets and tools in real time (reducing the maintenance needed), emergency alert systems, and virtual training that prepares employees for procedures in dangerous situations.

On the other hand, monitoring employees to improve their safety comes with risks that make it a controversial practice. Monitoring can take many forms. Visual sensors can detect potential hazards, such as a worker appearing fatigued or not wearing protective equipment. Alternatively, biometric sensors can monitor employees' heart rates or heat stress levels. These sensors, coupled with the predictive power of AI, create personalized alerts so that individuals can correct their behavior before an accident occurs.
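As a rough illustration of how such alerts might work, the sketch below pairs simulated wearable readings with fixed thresholds. The function names and threshold values are hypothetical; a real system would personalize the limits and use a predictive model rather than simple cut-offs.

```python
# Illustrative sketch only: a toy alerting loop for wearable biometric readings.
# read_sensor, send_alert and the threshold values are hypothetical placeholders,
# not part of any real vendor API.
import time

HEART_RATE_LIMIT = 160     # beats per minute; would be personalized in practice
HEAT_STRESS_LIMIT = 38.5   # body temperature in degrees Celsius

def read_sensor():
    """Placeholder for pulling the latest reading from a worker's arm-band sensor."""
    return {"heart_rate": 152, "body_temp_c": 38.9}

def send_alert(message):
    """Placeholder for pushing a notification to the worker's own device."""
    print(f"ALERT: {message}")

while True:
    reading = read_sensor()
    if reading["heart_rate"] > HEART_RATE_LIMIT:
        send_alert("Heart rate above your personal threshold - consider taking a break.")
    if reading["body_temp_c"] > HEAT_STRESS_LIMIT:
        send_alert("Heat stress risk detected - move somewhere cooler and hydrate.")
    time.sleep(60)  # poll once a minute
```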

Fujitsu has been monitoring the heat stress of its employees since as far back as 2017. Employees wear arm-band sensors that send an alert if the wearer is at risk of excessive heat stress; these are particularly useful for those who work outdoors during the hottest months of the year. With global temperatures rising and discussions taking place all over the world about working in new levels of extreme heat, monitoring employees' temperatures could be a proactive way of ensuring their health and safety.

Another form of monitoring, used to protect against harassment, involves scanning the work produced by employees. An AI program scans emails and other documents for inappropriate content, which can then be flagged on the system. Many workplaces stress the importance of creating an environment where workers feel safe on an emotional level.
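A very simplified sketch of this kind of scan is shown below. Real systems generally rely on trained classifiers rather than keyword lists; the term list and the flag_for_review function here are purely hypothetical.

```python
# Illustrative sketch only: a naive keyword-based scan of workplace messages.
# FLAGGED_TERMS and flag_for_review are hypothetical; production systems would
# use a trained classifier and route matches to a human reviewer.
FLAGGED_TERMS = {"idiot", "worthless", "threat"}  # simplistic example list

def flag_for_review(message_id, matches):
    """Placeholder: record the message for a human reviewer, not automatic punishment."""
    print(f"Message {message_id} flagged for review: {sorted(matches)}")

def scan_message(message_id, body):
    words = {w.strip(".,!?").lower() for w in body.split()}
    matches = words & FLAGGED_TERMS
    if matches:
        flag_for_review(message_id, matches)

scan_message("msg-001", "You are worthless and should quit.")    # flagged
scan_message("msg-002", "Thanks for the update, see you at 3.")  # not flagged
```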

However, increased surveillance does not come without risks to employees' privacy and to the trust placed in them. The boundary between monitoring for safety and monitoring for productivity must be clearly drawn. Too often, companies implement measures under the guise of improving worker safety when they are in fact measuring worker productivity.

Not only does monitoring negatively affect workers, but studies have also shown that it tends to backfire on the companies that implement it. A study published in 2021 found that monitored employees were substantially more likely to take unapproved breaks and to deliberately work at a slower pace. It is becoming more widely acknowledged that placing trust in workers and giving them greater flexibility in how they work increases productivity. If an employee is simply not working, a company should be able to tell from their output, not from how many minutes they have been inactive on their computer, especially since the statistics produced by monitoring software are often inaccurate. Employee monitoring also tends to create resentment among workers and undermines a healthy workplace environment. Micro-managing, productivity paranoia and increased surveillance simply do not work.

Thus, if companies choose to monitor their workers for safety reasons in any way, it is important that the information is used only for health and safety measures, not productivity. Furthermore, the company needs to make this explicit and put effective protections in place to ensure it. While it is important to ensure worker safety, and technology is a great way of achieving this, nobody wants to live or work in a place where they feel as if Big Brother is always watching…

smartR AI has created alertR, a highly secure and private AI platform to help individuals monitor their health. Private AI for health is a rapidly expanding market, but with alertR you can be sure of the highest level of security through its chain of trust which only allows trusted devices to decode the encrypted data. Contact smartR AI for AI you can trust.
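To illustrate the general idea of encrypted data that only trusted devices can decode, consider the conceptual sketch below. It is not a description of alertR's actual architecture; a real chain of trust would involve per-device certificates and key attestation rather than a single shared key.

```python
# Conceptual sketch only: a device can read the health data only if it holds
# the trusted key. Uses the widely available "cryptography" package (Fernet).
from cryptography.fernet import Fernet

trusted_key = Fernet.generate_key()               # provisioned only to trusted devices
trusted_device = Fernet(trusted_key)
untrusted_device = Fernet(Fernet.generate_key())  # holds a different key

ciphertext = trusted_device.encrypt(b"heart_rate=152;temp=38.9")

print(trusted_device.decrypt(ciphertext))  # succeeds: b'heart_rate=152;temp=38.9'

try:
    untrusted_device.decrypt(ciphertext)
except Exception:
    print("Untrusted device cannot decode the data.")
```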

 

Written by Celene Sandiford, smartR AI
