

AI Advances Surveillance


AI has evolved beyond simple motion detection to understand a range of “normal” behaviors and to only trigger a notification or response if an abnormality is detected. Image courtesy of Just_Super / iStock / Getty Images Plus / via Getty Images
Video surveillance is a facet of the security industry that has deployed algorithms for some time. With the rapid advancement of artificial intelligence (AI), the accuracy of that detection has grown exponentially — as has what can be detected. Ahead, industry experts delve into what AI is solving for video surveillance customers, how that AI is trained, and what differences exist between AI being deployed at the edge, in the cloud, or somewhere in between.

Motion Detection in the Past

The motion detection process traditionally revolves around pixels. “In security, people and vehicles are the two most desired objects in a scene to identify,” says Quang Trinh, business development manager, platform technologies, Axis Communications, Chelmsford, Mass. “Traditional video motion detection is based on algorithms that analyze pixel clusters. With this type of detection, the end customer is able to adjust the targeted pixel density based on what an object would fill in the scene. This approach results in some false positives attributed to pixel changes in the scene from lighting, shadows, and swaying objects in the background.

“With modern AI architectures built on deep learning, detection based on pixel clusters can now be turned into object detection and classification,” Trinh adds. Previously, the process tended toward false positives, as many sources of motion are non-human. “As AI-based object detection and classification become more efficient on edge devices like IP cameras, false positives compared to basic video motion detection are greatly reduced. The environmental factors that resulted in false positives in pixel-based video motion detection are minimized with these types of AI models, since they are built solely to detect and classify people and vehicles.”

Aaron Saks, director of sales enablement, Hanwha Vision America, Teaneck, N.J., adds, “Traditional systems used pixel-based motion, which led to a higher number of false positives and created excessive bandwidth and storage space constraints. Surveillance camera operators can easily get bombarded with too much information — and too many false event alarms — caused by lighting changes, shadows, or leaves blowing in the wind.”
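To make the pixel-cluster approach concrete, here is a minimal sketch of traditional frame-differencing motion detection using OpenCV. The file name, threshold, and pixel-count values are illustrative only, not anything from Axis or Hanwha. Any sufficiently large pixel change, whether a shadow, a lighting shift, or leaves in the wind, trips the detector, which is exactly the false-positive problem described above.

```python
# Sketch of traditional pixel-based motion detection via frame differencing.
# Any change in pixel values above the threshold registers as "motion",
# whether it comes from a person, a shadow, or a swaying branch.
import cv2

def pixel_motion_detected(prev_gray, curr_gray, pixel_threshold=25, min_changed_pixels=500):
    """Return True if enough pixels changed between two grayscale frames."""
    diff = cv2.absdiff(prev_gray, curr_gray)                       # per-pixel difference
    _, mask = cv2.threshold(diff, pixel_threshold, 255, cv2.THRESH_BINARY)
    changed = cv2.countNonZero(mask)                               # size of the "pixel cluster"
    return changed >= min_changed_pixels

cap = cv2.VideoCapture("camera_feed.mp4")  # hypothetical file or RTSP URL
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if pixel_motion_detected(prev_gray, gray):
        print("Motion event (may be a person, a shadow, or leaves in the wind)")
    prev_gray = gray
```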


Deep learning involves exposing the AI to a vast sea of data to create as deep of a knowledge base as possible. Image courtesy of gorodenkoff / iStock / Getty Images Plus / via Getty Images

These models are using similar neural networks to how the human brain would react and learn from experience. … They keep getting trained over and over and they learn from experience.

— Priya Serai, Zeus Fire and Security

Everyone Learns Differently

I’m sure we’ve all heard the phrase “everyone learns differently.” Given the way AI learning mimics the human mind, the same must be true — to some extent — of AI. So what are the different ways that AI can learn?

Christopher Zenaty, president, Turing AI, San Mateo, Calif., offers the following insights:

  • Anomaly Detection – AI models learn normal activity patterns in an environment and can quickly identify unusual behaviors (e.g., unauthorized access, loitering, unattended objects).
  • Pattern Recognition – Deep learning enables AI to distinguish between people, vehicles, and objects while recognizing specific actions, such as aggressive movements or suspicious gatherings.
  • Adaptive Learning – Over time, deep learning algorithms refine their accuracy by continuously training on new video data, reducing false alarms and improving event detection.
  • Real-Time Processing – AI-powered deep learning can analyze thousands of video frames per second, delivering instant alerts for potential security threats.

“By leveraging deep learning, modern AI surveillance systems enhance security, automate threat detection, and minimize human monitoring efforts — ultimately leading to faster response times and better protection,” Zenaty says.
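As a rough illustration of the anomaly-detection idea in Zenaty’s list, the sketch below learns a per-hour baseline of event counts and flags counts that deviate sharply from it. The thresholds and data are invented for illustration; real products learn far richer behavioral models.

```python
# Sketch of anomaly detection: learn what "normal" activity looks like per hour
# of day, then flag counts that deviate sharply from that baseline.
from statistics import mean, stdev

def build_baseline(history):
    """history: dict of hour -> list of event counts observed at that hour on normal days."""
    return {hour: (mean(counts), stdev(counts)) for hour, counts in history.items() if len(counts) > 1}

def is_anomalous(baseline, hour, observed_count, z_threshold=3.0):
    """Flag an observation whose z-score against the learned baseline is too large."""
    mu, sigma = baseline[hour]
    if sigma == 0:
        return observed_count != mu
    return abs(observed_count - mu) / sigma > z_threshold

history = {22: [2, 3, 1, 2, 4, 2, 3]}           # typical late-evening activity counts
baseline = build_baseline(history)
print(is_anomalous(baseline, 22, 25))            # True: unusual burst of activity at 10 p.m.
```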

Today’s AI can be trained beyond the simplicity of registering changes in pixels.

“AI-based models improve over time as feature extractions from people and vehicles add context to the scene,” Trinh says. “An AI-based model that detects people would also look for hands, legs, upper/lower clothing color, head, face, and other features that distinguish people from other moving objects. For vehicles, sub-features such as vehicle types like bicycles, motorcycles, cars, SUVs, trucks, vans, and buses can isolate specific vehicle types that are of importance in a scene.”
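A hypothetical sketch of how those sub-features add context: each detection carries a class plus attributes, so downstream logic can isolate only the object types that matter in a scene. The field names here are illustrative, not any vendor’s schema.

```python
# Each detection carries a class plus extracted sub-features (attributes),
# which lets a search isolate, say, only trucks or only people in red clothing.
from dataclasses import dataclass, field

@dataclass
class Detection:
    object_class: str                 # "person" or "vehicle"
    attributes: dict = field(default_factory=dict)

detections = [
    Detection("person", {"upper_color": "red", "lower_color": "blue"}),
    Detection("vehicle", {"type": "truck"}),
    Detection("vehicle", {"type": "bicycle"}),
]

# Keep only the vehicle types that matter for this scene.
trucks = [d for d in detections if d.object_class == "vehicle" and d.attributes.get("type") == "truck"]
print(len(trucks))  # 1
```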

AI aims to identify problems in real time and even prevent them from arising, but it is also deployed to make sifting through video after an incident far more efficient. Instead of human beings spending hours combing through footage, AI can search a video backlog in a fraction of that time.

“AI has also been deployed in what we call Smart Video Search,” says Dean Drako, CEO, Eagle Eye Networks, Austin, Texas. “As you know it can take a very long time for a business owner or security director to sift through video to find the person, object or vehicle they want to find. AI-powered smart video search works just like searching the web. You type in a search term like ‘red Honda car’ or ‘man with backpack’ and it instantly brings up the video you’re looking for.”
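The retrieval Drako describes is typically built on joint text-and-image embeddings. The sketch below shows the general technique, not Eagle Eye Networks’ implementation; the model name, frame files, and use of the sentence-transformers library are assumptions for illustration.

```python
# Sketch of text-to-video search: embed exported key frames and the text query
# into a shared vector space, then rank frames by cosine similarity.
from sentence_transformers import SentenceTransformer, util
from PIL import Image

model = SentenceTransformer("clip-ViT-B-32")          # joint text/image embedding model

frame_paths = ["frame_0001.jpg", "frame_0002.jpg"]    # hypothetical exported key frames
frame_embeddings = model.encode([Image.open(p) for p in frame_paths])

query_embedding = model.encode("red Honda car")       # the operator's search phrase
scores = util.cos_sim(query_embedding, frame_embeddings)[0]

best = max(range(len(frame_paths)), key=lambda i: float(scores[i]))
print(f"Best match: {frame_paths[best]} (score {float(scores[best]):.2f})")
```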

How the Training Works

In a way, AI learns much like we do. It is given a large data set — or examples — of behaviors that it deems to be within a “normal range.” The AI is then introduced to countless examples of behavior that falls outside of that range, so that it builds as complete an understanding as possible of what behavior constitutes a problem or signals that one may arise.

“Deep learning uses neural networks behind the scenes,” says Priya Serai, chief information officer, Zeus Fire and Security, Paoli, Pa. “Neural networks really are like layers and layers of algorithms that process data in a way that is very similar to the human brain. That’s why with generative AI models like ChatGPT you get responses that are similar to human responses. These models are using similar neural networks to how the human brain would react and learn from experience. It’s not like you train them once and it’s done. They keep getting trained over and over and they learn from experience.”
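A minimal sketch of what “layers and layers of algorithms” looks like in code: a small feed-forward network in PyTorch that turns a feature vector into scores for a few illustrative classes. The layer sizes and class set are invented for the example.

```python
# A tiny stack of neural-network layers: each layer feeds its output to the next.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Linear(512, 128),          # input features -> hidden representation
    nn.ReLU(),
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 3),             # scores for three illustrative classes, e.g. person / vehicle / other
)

features = torch.randn(1, 512)    # stand-in for features extracted from a video frame
scores = classifier(features)
print(scores.softmax(dim=1))      # probabilities over the three classes
```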

Dave deLisser, vice president, product management, IDIS Americas, Coppell, Texas, gives the following examples from the retail market: “A deep learning model trained on footage of a retail store might, for example, recognize the natural flow of customers through and around the store,” he says. “It alerts operators accordingly when it detects abnormal behavior, such as a person loitering or a large crowd forming. This makes it far more proactive by enabling early threat detection or emergency detection, enabling security teams to act faster before losses or accidents occur.”

The benefits of automating this process are numerous. “Until the advent of deep learning, surveillance operators monitored and followed objects in real time, such as people or cars, often across multiple video streams,” deLisser says. “This led to operator fatigue and missed events, especially when viewing multiple camera feeds from busy scenes. Today, deep learning algorithms can accurately detect and categorize objects using spatial-temporal patterns in video data. This capability increases the effectiveness of surveillance systems. It also aids various tasks such as identifying suspects in criminal investigations, traffic patterns for better urban and city planning, and heatmaps and directional footfall for optimizing casino, hospitality or store layouts.”
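As a small illustration of the heatmap use case deLisser mentions, the sketch below accumulates detection centers onto a coarse grid so that foot-traffic hotspots stand out over time. The grid size and input format are assumptions for the example.

```python
# Accumulate person-detection centers onto a grid to build a simple footfall heatmap.
import numpy as np

def build_heatmap(detection_centers, frame_w, frame_h, grid=(32, 18)):
    """detection_centers: iterable of (x, y) pixel coordinates of detected people."""
    heat = np.zeros(grid)
    for x, y in detection_centers:
        col = min(int(x / frame_w * grid[0]), grid[0] - 1)
        row = min(int(y / frame_h * grid[1]), grid[1] - 1)
        heat[col, row] += 1
    return heat

centers = [(960, 540), (970, 550), (100, 80)]    # example detections collected over time
heatmap = build_heatmap(centers, frame_w=1920, frame_h=1080)
print(heatmap.max(), heatmap.sum())              # hotspot intensity and total footfall
```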

Tim Palmquist, vice president, Americas, Milestone, Oswego, Ore., says Milestone is rolling out a new data insights project: “What we call Project Hafnia, which is our data insights product. Data insights is the new frontier for the future of our industry because quality training data to make the AI products actually perform is what’s needed. Today we have a lot of promises but not a lot of results. As we improve the quality of the training data, then you will get much better results on the other side.”

This project was born out of the search for quality training data, Palmquist says. “The problem is their tools have been trained on generic data instead of the higher quality data of real life,” he says. “So what Milestone has proposed is: what if we can get a lake of data, real-life data, and do it in a responsible way so we are protecting privacy, but we’re providing excellent quality data to train AI models. Wouldn’t that help move the industry faster along?”

Continued Service From the Integrator

What about the lifecycle of the AI: How are updates handled? What role does the integrator play in this? The answer is — as it so often is — it depends.

“In most edge-based surveillance systems, no training is needed on the user side,” Saks says. “It’s all done during software development and shipped with the product. There could be updates over time to enhance the model and improve its performance, but that’s just part of the normal installation and maintenance process to keep everything up to date. Some systems allow users to do custom training on a new type of object.”


AI can significantly reduce the workload on the employees of an organization by streamlining tasks like video search. Image courtesy of Avigilon

Security dealers and integrators can enhance model accuracy by regularly providing updated training data, implementing adaptive learning systems, and incorporating feedback loops where users label false positives or negatives to refine performance.

— Satish Raj, Pro-Vigil

Saks continues, “There also are server-based products that can learn over time or, again, custom train on an unknown object. Other systems can add training on the fly, but it’s simply not going to be as in-depth as factory-based training because you have a much smaller data set. The best approach is following best practice guidelines of camera installation parameters.”

Trinh offers the following: “Will security dealers be able to offer this custom AI training as a service in the future? Absolutely,” he says. “There are already open-source platforms that allow security dealers to monetize and build a business from developing custom AI models. To capitalize on this opportunity, the security industry will need to develop a specialized workforce capable of tackling AI-specific tasks, but the opportunity is there.

“The opportunity is also there for vendors and security dealers to establish industry-wide guidelines around the ethical use and development of AI,” Trinh continues. “Until legislation is in place at the global, federal, and state levels, AI’s ethical and responsible use relies on self-governance.”

Hamish Dobson, corporate vice president, Avigilon & Pelco Product, Motorola Solutions, Chicago, says, “Meanwhile, some manufacturers offer means for fine tuning in the field. This allows for better accuracy over time. Both dealers and end users can typically select a number of false and true events and teach the device to do better next time.”

As for the role that the integrator plays, Satish Raj, chief technology officer at Pro-Vigil, San Antonio, Texas, says, “Security dealers and integrators can enhance model accuracy by regularly providing updated training data, implementing adaptive learning systems, and incorporating feedback loops where users label false positives or negatives to refine performance.”
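One plausible shape for the feedback loop Raj describes is sketched below: operators label each alert as a true or false positive, the labeled clips accumulate in a queue, and retraining or fine-tuning is triggered once enough examples exist. The file layout, function names, and threshold are assumptions, not any vendor’s workflow.

```python
# Sketch of a label-and-retrain feedback loop for refining an AI model.
import json
from pathlib import Path

FEEDBACK_FILE = Path("feedback_queue.jsonl")   # hypothetical local queue of labeled events

def record_feedback(event_id: str, clip_path: str, operator_label: str) -> None:
    """operator_label: 'true_positive' or 'false_positive'."""
    entry = {"event_id": event_id, "clip": clip_path, "label": operator_label}
    with FEEDBACK_FILE.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def ready_for_retraining(min_samples: int = 500) -> bool:
    """Trigger a model update only once enough labeled examples have accumulated."""
    if not FEEDBACK_FILE.exists():
        return False
    return len(FEEDBACK_FILE.read_text().splitlines()) >= min_samples

record_feedback("evt-123", "clips/evt-123.mp4", "false_positive")
print(ready_for_retraining())
```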

The AI Advantage

Put more explicitly, AI and deep learning are improving both the accuracy and the efficiency of the video surveillance market. “Across many vertical sectors, customers quickly recognized the potential of AI video to improve their security posture further and tackle other operational challenges,” Dobson says. “As we continue to invest in innovative AI-enhanced solutions, we view the applications as almost limitless, particularly as the accuracy and power of AI continue to advance rapidly. From crime prevention, health and safety, regulatory compliance, improved customer service, and in-store and customer behavioral intelligence, users constantly find new uses for established analytics tools and ask for new capabilities.”

Speaking to health and safety and regulatory compliance in addition to crime prevention, Raj adds, “By automating the tracking of vehicles and people, AI reduces the need for constant manual monitoring, freeing up staff for higher-priority tasks. Many customers also seek AI surveillance for compliance and safety enforcement, such as ensuring that employees adhere to safety protocols like wearing required protective equipment in hazardous work environments.”

Looking to the future, Scott R. Elkins, CEO, Zeus Fire and Security, says, “As the manufacturers’ technology advances, there will be more AI onboard — whether it’s a camera or a network video recorder (NVR). The question that dealers will have to ask and answer — whether they’re small, medium or large, whether they’re integrators or traditional security alarm companies — is are they going to be in the business of creating unique cases that customers want relative to their specific market? I guarantee you that grocery versus retail versus health care versus schools will all have unique use cases and specific needs that won’t happen out of the box.”

At the Edge vs. In the Cloud

AI can be deployed both on camera and in the cloud. Sometimes it is deployed exclusively in one location or the other; in other cases the two are layered together. But what are the key differences?

“The advantages of AI-powered video surveillance depend on the customer’s tolerance for latency in real-time data analysis versus after-the-fact incident detection,” says Quang Trinh, Axis Communications. “Fortunately, AI at the edge is becoming more accessible and less costly, enabling AI models to run on far edge devices like IP cameras or on on-premise edge servers that support object detection and classification.

“New AI architectures, such as multi-modal models, combine text, language, images and videos to unlock more capabilities,” he says. “While these new models will require more computing, cloud-based versions of those multi-modal models can provide value to an existing security system.”

Aaron Saks adds, “For Hanwha Vision, there are three main ways: on the edge of the camera, cloud-based, and server based. Our main ‘go-to’ approach is edge-based, meaning I don’t need to stream to the cloud; I don’t need internet connectivity. Yes, the cloud can scale up quickly, it’s very elastic. But typically, any cloud system will have costs per camera per month because they have to ingest and process data. The other big differentiator between edge and cloud is with a cloud system, data has to egress a site, go into the cloud, get processed, and then come back. Ultimately, neither one is better than the other. It’s a matter of different capabilities for different applications. Server-based systems have large initial costs due to the processing capabilities required and may have limitations on the total number of cameras or megapixels it can process, based on the types of analytics and objects it is detecting.”

Finally, Dave deLisser, IDIS Americas, says, “While AI vision processing at the edge and in the cloud offer similar benefits, one main difference is where the video analytics are running, and each has its advantages and challenges. Edge systems with distributed intelligence provide scaling agility without overloading a network. With every edge device, video is analyzed locally. If a customer needs further cameras, they are easy additions to surveillance systems that won’t lead to an overloaded cloud server or network or recurring license fees, as with many VSaaS models.”

deLisser continues, “Cloud systems are controlled and stored centrally, enabling customers to easily manage multiple sites from one interface. The challenge with the cloud, however, is that while it’s highly scalable for storage and processing power, there can be bottlenecks if the network cannot deal with large data transfers, especially with a large number of remote cameras, which in turn can lead to delayed alerts to potential security and safety threats. For mission critical security environments, cloud-based surveillance becomes prohibitive, where every split-second counts in identifying, verifying and responding to a potential security or safety event.”
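A back-of-envelope sketch of the bandwidth trade-off described above: streaming every camera to the cloud scales linearly with camera count, while an edge-analytics site only uploads event clips and metadata. The bitrates and event sizes are illustrative assumptions, not vendor figures.

```python
# Rough comparison of uplink bandwidth for cloud streaming vs. edge analytics.
def cloud_uplink_mbps(cameras: int, per_camera_mbps: float = 4.0) -> float:
    """Continuous uplink needed to send full video streams to the cloud."""
    return cameras * per_camera_mbps

def edge_uplink_mbps(events_per_hour: int, kb_per_event: float = 50.0) -> float:
    """Average uplink when only event clips and metadata leave the site."""
    return events_per_hour * kb_per_event * 8 / 1000 / 3600  # KB per hour -> Mbps

print(cloud_uplink_mbps(100))   # e.g. 400 Mbps of constant uplink for 100 cameras
print(edge_uplink_mbps(100))    # a tiny fraction of a megabit per second for metadata
```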

Source: SDM Magazine
